html_url
stringlengths
48
51
title
stringlengths
5
155
comments
stringlengths
63
15.7k
body
stringlengths
0
17.7k
comment_length
int64
16
949
text
stringlengths
164
23.7k
https://github.com/huggingface/datasets/issues/1167
❓ On-the-fly tokenization with datasets, tokenizers, and torch Datasets and Dataloaders
We're working on adding on-the-fly transforms in datasets. Currently the only on-the-fly functions that can be applied are in `set_format` in which we transform the data in either numpy/torch/tf tensors or pandas. For example ```python dataset.set_format("torch") ``` applies `torch.Tensor` to the dataset entries on-the-fly. We plan to extend this to user-defined formatting transforms. For example ```python dataset.set_format(transform=tokenize) ``` What do you think ?
Hi there, I have a question regarding "on-the-fly" tokenization. This question was elicited by reading the "How to train a new language model from scratch using Transformers and Tokenizers" [here](https://huggingface.co/blog/how-to-train). Towards the end there is this sentence: "If your dataset is very large, you can opt to load and tokenize examples on the fly, rather than as a preprocessing step". I've tried coming up with a solution that would combine both `datasets` and `tokenizers`, but did not manage to find a good pattern. I guess the solution would entail wrapping a dataset into a Pytorch dataset. As a concrete example from the [docs](https://huggingface.co/transformers/custom_datasets.html) ```python import torch class SquadDataset(torch.utils.data.Dataset): def __init__(self, encodings): # instead of doing this beforehand, I'd like to do tokenization on the fly self.encodings = encodings def __getitem__(self, idx): return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()} def __len__(self): return len(self.encodings.input_ids) train_dataset = SquadDataset(train_encodings) ``` How would one implement this with "on-the-fly" tokenization exploiting the vectorized capabilities of tokenizers? ---- Edit: I have come up with this solution. It does what I want, but I feel it's not very elegant ```python class CustomPytorchDataset(Dataset): def __init__(self): self.dataset = some_hf_dataset(...) self.tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased") def __getitem__(self, batch_idx): instance = self.dataset[text_col][batch_idx] tokenized_text = self.tokenizer(instance, truncation=True, padding=True) return tokenized_text def __len__(self): return len(self.dataset) @staticmethod def collate_fn(batch): # batch is a list, however it will always contain 1 item because we should not use the # batch_size argument as batch_size is controlled by the sampler return {k: torch.tensor(v) for k, v in batch[0].items()} torch_ds = CustomPytorchDataset() # NOTE: batch_sampler returns list of integers and since here we have SequentialSampler # it returns: [1, 2, 3], [4, 5, 6], etc. - check calling `list(batch_sampler)` batch_sampler = BatchSampler(SequentialSampler(torch_ds), batch_size=3, drop_last=True) # NOTE: no `batch_size` as now the it is controlled by the sampler! dl = DataLoader(dataset=torch_ds, sampler=batch_sampler, collate_fn=torch_ds.collate_fn) ```
63
❓ On-the-fly tokenization with datasets, tokenizers, and torch Datasets and Dataloaders Hi there, I have a question regarding "on-the-fly" tokenization. This question was elicited by reading the "How to train a new language model from scratch using Transformers and Tokenizers" [here](https://huggingface.co/blog/how-to-train). Towards the end there is this sentence: "If your dataset is very large, you can opt to load and tokenize examples on the fly, rather than as a preprocessing step". I've tried coming up with a solution that would combine both `datasets` and `tokenizers`, but did not manage to find a good pattern. I guess the solution would entail wrapping a dataset into a Pytorch dataset. As a concrete example from the [docs](https://huggingface.co/transformers/custom_datasets.html) ```python import torch class SquadDataset(torch.utils.data.Dataset): def __init__(self, encodings): # instead of doing this beforehand, I'd like to do tokenization on the fly self.encodings = encodings def __getitem__(self, idx): return {key: torch.tensor(val[idx]) for key, val in self.encodings.items()} def __len__(self): return len(self.encodings.input_ids) train_dataset = SquadDataset(train_encodings) ``` How would one implement this with "on-the-fly" tokenization exploiting the vectorized capabilities of tokenizers? ---- Edit: I have come up with this solution. It does what I want, but I feel it's not very elegant ```python class CustomPytorchDataset(Dataset): def __init__(self): self.dataset = some_hf_dataset(...) self.tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased") def __getitem__(self, batch_idx): instance = self.dataset[text_col][batch_idx] tokenized_text = self.tokenizer(instance, truncation=True, padding=True) return tokenized_text def __len__(self): return len(self.dataset) @staticmethod def collate_fn(batch): # batch is a list, however it will always contain 1 item because we should not use the # batch_size argument as batch_size is controlled by the sampler return {k: torch.tensor(v) for k, v in batch[0].items()} torch_ds = CustomPytorchDataset() # NOTE: batch_sampler returns list of integers and since here we have SequentialSampler # it returns: [1, 2, 3], [4, 5, 6], etc. - check calling `list(batch_sampler)` batch_sampler = BatchSampler(SequentialSampler(torch_ds), batch_size=3, drop_last=True) # NOTE: no `batch_size` as now the it is controlled by the sampler! dl = DataLoader(dataset=torch_ds, sampler=batch_sampler, collate_fn=torch_ds.collate_fn) ``` We're working on adding on-the-fly transforms in datasets. Currently the only on-the-fly functions that can be applied are in `set_format` in which we transform the data in either numpy/torch/tf tensors or pandas. For example ```python dataset.set_format("torch") ``` applies `torch.Tensor` to the dataset entries on-the-fly. We plan to extend this to user-defined formatting transforms. For example ```python dataset.set_format(transform=tokenize) ``` What do you think ?
https://github.com/huggingface/datasets/issues/1110
Using a feature named "_type" fails with certain operations
Thanks for reporting ! Indeed this is a keyword in the library that is used to encode/decode features to a python dictionary that we can save/load to json. We can probably change `_type` to something that is less likely to collide with user feature names. In this case we would want something backward compatible though. Feel free to try a fix and open a PR, and to ping me if I can help :)
A column named `_type` leads to a `TypeError: unhashable type: 'dict'` for certain operations: ```python from datasets import Dataset, concatenate_datasets ds = Dataset.from_dict({"_type": ["whatever"]}).map() concatenate_datasets([ds]) # or simply Dataset(ds._data) ``` Context: We are using datasets to persist data coming from elasticsearch to feed to our pipeline, and elasticsearch has a `_type` field, hence the strange name of the column. Not sure if you wish to support this specific column name, but if you do i would be happy to try a fix and provide a PR. I already had a look into it and i think the culprit is the `datasets.features.generate_from_dict` function. It uses the hard coded `_type` string to figure out if it reached the end of the nested feature object from a serialized dict. Best wishes and keep up the awesome work!
74
Using a feature named "_type" fails with certain operations A column named `_type` leads to a `TypeError: unhashable type: 'dict'` for certain operations: ```python from datasets import Dataset, concatenate_datasets ds = Dataset.from_dict({"_type": ["whatever"]}).map() concatenate_datasets([ds]) # or simply Dataset(ds._data) ``` Context: We are using datasets to persist data coming from elasticsearch to feed to our pipeline, and elasticsearch has a `_type` field, hence the strange name of the column. Not sure if you wish to support this specific column name, but if you do i would be happy to try a fix and provide a PR. I already had a look into it and i think the culprit is the `datasets.features.generate_from_dict` function. It uses the hard coded `_type` string to figure out if it reached the end of the nested feature object from a serialized dict. Best wishes and keep up the awesome work! Thanks for reporting ! Indeed this is a keyword in the library that is used to encode/decode features to a python dictionary that we can save/load to json. We can probably change `_type` to something that is less likely to collide with user feature names. In this case we would want something backward compatible though. Feel free to try a fix and open a PR, and to ping me if I can help :)
https://github.com/huggingface/datasets/issues/1103
Add support to download kaggle datasets
Hey, I think this is great idea. Any plan to integrate kaggle private datasets loading to `datasets`?
We can use API key
17
Add support to download kaggle datasets We can use API key Hey, I think this is great idea. Any plan to integrate kaggle private datasets loading to `datasets`?
https://github.com/huggingface/datasets/issues/1103
Add support to download kaggle datasets
The workflow for downloading a Kaggle dataset and turning it into an HF dataset is pretty simple: ```python !kaggle datasets download -p path ds = load_dataset(path) ``` Native support would make our download logic even more complex, and I don't think this is a good idea considering this particular feature is not requested often. PS: Kaggle should integrate their API with `fsspec` to allow us to use a common interface if they are interested in tighter integrations
We can use API key
77
Add support to download kaggle datasets We can use API key The workflow for downloading a Kaggle dataset and turning it into an HF dataset is pretty simple: ```python !kaggle datasets download -p path ds = load_dataset(path) ``` Native support would make our download logic even more complex, and I don't think this is a good idea considering this particular feature is not requested often. PS: Kaggle should integrate their API with `fsspec` to allow us to use a common interface if they are interested in tighter integrations
https://github.com/huggingface/datasets/issues/1064
Not support links with 302 redirect
> Hi ! > This kind of links is now supported by the library since #1316 I updated links in TLC datasets to be the github links in this pull request https://github.com/huggingface/datasets/pull/1737 Everything works now. Thank you.
I have an issue adding this download link https://github.com/jitkapat/thailitcorpus/releases/download/v.2.0/tlc_v.2.0.tar.gz it might be because it is not a direct link (it returns 302 and redirects to aws that returns 403 for head requests). ``` r.head("https://github.com/jitkapat/thailitcorpus/releases/download/v.2.0/tlc_v.2.0.tar.gz", allow_redirects=True) # <Response [403]> ```
37
Not support links with 302 redirect I have an issue adding this download link https://github.com/jitkapat/thailitcorpus/releases/download/v.2.0/tlc_v.2.0.tar.gz it might be because it is not a direct link (it returns 302 and redirects to aws that returns 403 for head requests). ``` r.head("https://github.com/jitkapat/thailitcorpus/releases/download/v.2.0/tlc_v.2.0.tar.gz", allow_redirects=True) # <Response [403]> ``` > Hi ! > This kind of links is now supported by the library since #1316 I updated links in TLC datasets to be the github links in this pull request https://github.com/huggingface/datasets/pull/1737 Everything works now. Thank you.
https://github.com/huggingface/datasets/issues/1046
Dataset.map() turns tensors into lists?
A solution is to have the tokenizer return a list instead of a tensor, and then use `dataset_tok.set_format(type = 'torch')` to convert that list into a tensor. Still not sure if bug.
I apply `Dataset.map()` to a function that returns a dict of torch tensors (like a tokenizer from the repo transformers). However, in the mapped dataset, these tensors have turned to lists! ```import datasets import torch from datasets import load_dataset print("version datasets", datasets.__version__) dataset = load_dataset("snli", split='train[0:50]') def tokenizer_fn(example): # actually uses a tokenizer which does something like: return {'input_ids': torch.tensor([[0, 1, 2]])} print("First item in dataset:\n", dataset[0]) tokenized = tokenizer_fn(dataset[0]) print("Tokenized hyp:\n", tokenized) dataset_tok = dataset.map(tokenizer_fn, batched=False, remove_columns=['label', 'premise', 'hypothesis']) print("Tokenized using map:\n", dataset_tok[0]) print(type(tokenized['input_ids']), type(dataset_tok[0]['input_ids'])) dataset_tok = dataset.map(tokenizer_fn, batched=False, remove_columns=['label', 'premise', 'hypothesis']) print("Tokenized using map:\n", dataset_tok[0]) print(type(tokenized['input_ids']), type(dataset_tok[0]['input_ids'])) ``` The output is: ``` version datasets 1.1.3 Reusing dataset snli (/home/tom/.cache/huggingface/datasets/snli/plain_text/1.0.0/bb1102591c6230bd78813e229d5dd4c7fbf4fc478cec28f298761eb69e5b537c) First item in dataset: {'premise': 'A person on a horse jumps over a broken down airplane.', 'hypothesis': 'A person is training his horse for a competition.', 'label': 1} Tokenized hyp: {'input_ids': tensor([[0, 1, 2]])} Loading cached processed dataset at /home/tom/.cache/huggingface/datasets/snli/plain_text/1.0.0/bb1102591c6230bd78813e229d5dd4c7fbf4fc478cec28f298761eb69e5b537c/cache-fe38f449fe9ac46f.arrow Tokenized using map: {'input_ids': [[0, 1, 2]]} <class 'torch.Tensor'> <class 'list'> ``` Or am I doing something wrong?
32
Dataset.map() turns tensors into lists? I apply `Dataset.map()` to a function that returns a dict of torch tensors (like a tokenizer from the repo transformers). However, in the mapped dataset, these tensors have turned to lists! ```import datasets import torch from datasets import load_dataset print("version datasets", datasets.__version__) dataset = load_dataset("snli", split='train[0:50]') def tokenizer_fn(example): # actually uses a tokenizer which does something like: return {'input_ids': torch.tensor([[0, 1, 2]])} print("First item in dataset:\n", dataset[0]) tokenized = tokenizer_fn(dataset[0]) print("Tokenized hyp:\n", tokenized) dataset_tok = dataset.map(tokenizer_fn, batched=False, remove_columns=['label', 'premise', 'hypothesis']) print("Tokenized using map:\n", dataset_tok[0]) print(type(tokenized['input_ids']), type(dataset_tok[0]['input_ids'])) dataset_tok = dataset.map(tokenizer_fn, batched=False, remove_columns=['label', 'premise', 'hypothesis']) print("Tokenized using map:\n", dataset_tok[0]) print(type(tokenized['input_ids']), type(dataset_tok[0]['input_ids'])) ``` The output is: ``` version datasets 1.1.3 Reusing dataset snli (/home/tom/.cache/huggingface/datasets/snli/plain_text/1.0.0/bb1102591c6230bd78813e229d5dd4c7fbf4fc478cec28f298761eb69e5b537c) First item in dataset: {'premise': 'A person on a horse jumps over a broken down airplane.', 'hypothesis': 'A person is training his horse for a competition.', 'label': 1} Tokenized hyp: {'input_ids': tensor([[0, 1, 2]])} Loading cached processed dataset at /home/tom/.cache/huggingface/datasets/snli/plain_text/1.0.0/bb1102591c6230bd78813e229d5dd4c7fbf4fc478cec28f298761eb69e5b537c/cache-fe38f449fe9ac46f.arrow Tokenized using map: {'input_ids': [[0, 1, 2]]} <class 'torch.Tensor'> <class 'list'> ``` Or am I doing something wrong? A solution is to have the tokenizer return a list instead of a tensor, and then use `dataset_tok.set_format(type = 'torch')` to convert that list into a tensor. Still not sure if bug.
https://github.com/huggingface/datasets/issues/1046
Dataset.map() turns tensors into lists?
It is expected behavior, you should set the format to `"torch"` as you mentioned to get pytorch tensors back. By default datasets returns pure python objects.
I apply `Dataset.map()` to a function that returns a dict of torch tensors (like a tokenizer from the repo transformers). However, in the mapped dataset, these tensors have turned to lists! ```import datasets import torch from datasets import load_dataset print("version datasets", datasets.__version__) dataset = load_dataset("snli", split='train[0:50]') def tokenizer_fn(example): # actually uses a tokenizer which does something like: return {'input_ids': torch.tensor([[0, 1, 2]])} print("First item in dataset:\n", dataset[0]) tokenized = tokenizer_fn(dataset[0]) print("Tokenized hyp:\n", tokenized) dataset_tok = dataset.map(tokenizer_fn, batched=False, remove_columns=['label', 'premise', 'hypothesis']) print("Tokenized using map:\n", dataset_tok[0]) print(type(tokenized['input_ids']), type(dataset_tok[0]['input_ids'])) dataset_tok = dataset.map(tokenizer_fn, batched=False, remove_columns=['label', 'premise', 'hypothesis']) print("Tokenized using map:\n", dataset_tok[0]) print(type(tokenized['input_ids']), type(dataset_tok[0]['input_ids'])) ``` The output is: ``` version datasets 1.1.3 Reusing dataset snli (/home/tom/.cache/huggingface/datasets/snli/plain_text/1.0.0/bb1102591c6230bd78813e229d5dd4c7fbf4fc478cec28f298761eb69e5b537c) First item in dataset: {'premise': 'A person on a horse jumps over a broken down airplane.', 'hypothesis': 'A person is training his horse for a competition.', 'label': 1} Tokenized hyp: {'input_ids': tensor([[0, 1, 2]])} Loading cached processed dataset at /home/tom/.cache/huggingface/datasets/snli/plain_text/1.0.0/bb1102591c6230bd78813e229d5dd4c7fbf4fc478cec28f298761eb69e5b537c/cache-fe38f449fe9ac46f.arrow Tokenized using map: {'input_ids': [[0, 1, 2]]} <class 'torch.Tensor'> <class 'list'> ``` Or am I doing something wrong?
26
Dataset.map() turns tensors into lists? I apply `Dataset.map()` to a function that returns a dict of torch tensors (like a tokenizer from the repo transformers). However, in the mapped dataset, these tensors have turned to lists! ```import datasets import torch from datasets import load_dataset print("version datasets", datasets.__version__) dataset = load_dataset("snli", split='train[0:50]') def tokenizer_fn(example): # actually uses a tokenizer which does something like: return {'input_ids': torch.tensor([[0, 1, 2]])} print("First item in dataset:\n", dataset[0]) tokenized = tokenizer_fn(dataset[0]) print("Tokenized hyp:\n", tokenized) dataset_tok = dataset.map(tokenizer_fn, batched=False, remove_columns=['label', 'premise', 'hypothesis']) print("Tokenized using map:\n", dataset_tok[0]) print(type(tokenized['input_ids']), type(dataset_tok[0]['input_ids'])) dataset_tok = dataset.map(tokenizer_fn, batched=False, remove_columns=['label', 'premise', 'hypothesis']) print("Tokenized using map:\n", dataset_tok[0]) print(type(tokenized['input_ids']), type(dataset_tok[0]['input_ids'])) ``` The output is: ``` version datasets 1.1.3 Reusing dataset snli (/home/tom/.cache/huggingface/datasets/snli/plain_text/1.0.0/bb1102591c6230bd78813e229d5dd4c7fbf4fc478cec28f298761eb69e5b537c) First item in dataset: {'premise': 'A person on a horse jumps over a broken down airplane.', 'hypothesis': 'A person is training his horse for a competition.', 'label': 1} Tokenized hyp: {'input_ids': tensor([[0, 1, 2]])} Loading cached processed dataset at /home/tom/.cache/huggingface/datasets/snli/plain_text/1.0.0/bb1102591c6230bd78813e229d5dd4c7fbf4fc478cec28f298761eb69e5b537c/cache-fe38f449fe9ac46f.arrow Tokenized using map: {'input_ids': [[0, 1, 2]]} <class 'torch.Tensor'> <class 'list'> ``` Or am I doing something wrong? It is expected behavior, you should set the format to `"torch"` as you mentioned to get pytorch tensors back. By default datasets returns pure python objects.
https://github.com/huggingface/datasets/issues/1004
how large datasets are handled under the hood
This library uses Apache Arrow under the hood to store datasets on disk. The advantage of Apache Arrow is that it allows to memory map the dataset. This allows to load datasets bigger than memory and with almost no RAM usage. It also offers excellent I/O speed. For example when you access one element or one batch ```python from datasets import load_dataset squad = load_dataset("squad", split="train") first_element = squad[0] one_batch = squad[:8] ``` then only this element/batch is loaded in memory, while the rest of the dataset is memory mapped.
Hi I want to use multiple large datasets with a mapping style dataloader, where they cannot fit into memory, could you tell me how you handled the datasets under the hood? is this you bring all in memory in case of mapping style ones? or is this some sharding under the hood and you bring in memory when necessary, thanks
90
how large datasets are handled under the hood Hi I want to use multiple large datasets with a mapping style dataloader, where they cannot fit into memory, could you tell me how you handled the datasets under the hood? is this you bring all in memory in case of mapping style ones? or is this some sharding under the hood and you bring in memory when necessary, thanks This library uses Apache Arrow under the hood to store datasets on disk. The advantage of Apache Arrow is that it allows to memory map the dataset. This allows to load datasets bigger than memory and with almost no RAM usage. It also offers excellent I/O speed. For example when you access one element or one batch ```python from datasets import load_dataset squad = load_dataset("squad", split="train") first_element = squad[0] one_batch = squad[:8] ``` then only this element/batch is loaded in memory, while the rest of the dataset is memory mapped.
https://github.com/huggingface/datasets/issues/1004
how large datasets are handled under the hood
How can we change how much data is loaded to memory with Arrow? I think that I am having some performance issue with it. When Arrow loads the data from disk it does it in multiprocess? It's almost twice slower training with arrow than in memory. EDIT: My fault! I had not seen the `dataloader_num_workers` in `TrainingArguments` ! Now I can parallelize and go fast! Sorry, and thanks.
Hi I want to use multiple large datasets with a mapping style dataloader, where they cannot fit into memory, could you tell me how you handled the datasets under the hood? is this you bring all in memory in case of mapping style ones? or is this some sharding under the hood and you bring in memory when necessary, thanks
68
how large datasets are handled under the hood Hi I want to use multiple large datasets with a mapping style dataloader, where they cannot fit into memory, could you tell me how you handled the datasets under the hood? is this you bring all in memory in case of mapping style ones? or is this some sharding under the hood and you bring in memory when necessary, thanks How can we change how much data is loaded to memory with Arrow? I think that I am having some performance issue with it. When Arrow loads the data from disk it does it in multiprocess? It's almost twice slower training with arrow than in memory. EDIT: My fault! I had not seen the `dataloader_num_workers` in `TrainingArguments` ! Now I can parallelize and go fast! Sorry, and thanks.
https://github.com/huggingface/datasets/issues/1004
how large datasets are handled under the hood
> How can we change how much data is loaded to memory with Arrow? I think that I am having some performance issue with it. When Arrow loads the data from disk it does it in multiprocess? It's almost twice slower training with arrow than in memory. Loading arrow data from disk is done with memory-mapping. This allows to load huge datasets without filling your RAM. Memory mapping is almost instantaneous and is done within one process. Then, the speed of querying examples from the dataset is I/O bounded depending on your disk. If it's an SSD then fetching examples from the dataset will be very fast. But since the I/O speed of an SSD is lower than the one of RAM it's expected to be slower to fetch data from disk than from memory. Still, if you load the dataset in different processes then it can be faster but there will still be the I/O bottleneck of the disk. > EDIT: > My fault! I had not seen the `dataloader_num_workers` in `TrainingArguments` ! Now I can parallelize and go fast! Sorry, and thanks. Ok let me know if that helps !
Hi I want to use multiple large datasets with a mapping style dataloader, where they cannot fit into memory, could you tell me how you handled the datasets under the hood? is this you bring all in memory in case of mapping style ones? or is this some sharding under the hood and you bring in memory when necessary, thanks
192
how large datasets are handled under the hood Hi I want to use multiple large datasets with a mapping style dataloader, where they cannot fit into memory, could you tell me how you handled the datasets under the hood? is this you bring all in memory in case of mapping style ones? or is this some sharding under the hood and you bring in memory when necessary, thanks > How can we change how much data is loaded to memory with Arrow? I think that I am having some performance issue with it. When Arrow loads the data from disk it does it in multiprocess? It's almost twice slower training with arrow than in memory. Loading arrow data from disk is done with memory-mapping. This allows to load huge datasets without filling your RAM. Memory mapping is almost instantaneous and is done within one process. Then, the speed of querying examples from the dataset is I/O bounded depending on your disk. If it's an SSD then fetching examples from the dataset will be very fast. But since the I/O speed of an SSD is lower than the one of RAM it's expected to be slower to fetch data from disk than from memory. Still, if you load the dataset in different processes then it can be faster but there will still be the I/O bottleneck of the disk. > EDIT: > My fault! I had not seen the `dataloader_num_workers` in `TrainingArguments` ! Now I can parallelize and go fast! Sorry, and thanks. Ok let me know if that helps !
https://github.com/huggingface/datasets/issues/996
NotADirectoryError while loading the CNN/Dailymail dataset
Looks like the google drive download failed. I'm getting a `Google Drive - Quota exceeded` error while looking at the downloaded file. We should consider finding a better host than google drive for this dataset imo related : #873 #864
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602... --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-9-cd4bf8bea840> in <module>() 22 23 ---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train') 25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation') 26 test = load_dataset('cnn_dailymail', '3.0.0', split='test') 5 frames /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
40
NotADirectoryError while loading the CNN/Dailymail dataset Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602... --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-9-cd4bf8bea840> in <module>() 22 23 ---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train') 25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation') 26 test = load_dataset('cnn_dailymail', '3.0.0', split='test') 5 frames /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' Looks like the google drive download failed. I'm getting a `Google Drive - Quota exceeded` error while looking at the downloaded file. We should consider finding a better host than google drive for this dataset imo related : #873 #864
https://github.com/huggingface/datasets/issues/996
NotADirectoryError while loading the CNN/Dailymail dataset
It is working now, thank you. Should I leave this issue open to address the Quota-exceeded error?
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602... --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-9-cd4bf8bea840> in <module>() 22 23 ---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train') 25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation') 26 test = load_dataset('cnn_dailymail', '3.0.0', split='test') 5 frames /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
17
NotADirectoryError while loading the CNN/Dailymail dataset Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602... --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-9-cd4bf8bea840> in <module>() 22 23 ---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train') 25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation') 26 test = load_dataset('cnn_dailymail', '3.0.0', split='test') 5 frames /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' It is working now, thank you. Should I leave this issue open to address the Quota-exceeded error?
https://github.com/huggingface/datasets/issues/996
NotADirectoryError while loading the CNN/Dailymail dataset
I've looked into it and couldn't find a solution. This looks like a Google Drive limitation.. Please try to use other hosts when possible
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602... --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-9-cd4bf8bea840> in <module>() 22 23 ---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train') 25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation') 26 test = load_dataset('cnn_dailymail', '3.0.0', split='test') 5 frames /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
24
NotADirectoryError while loading the CNN/Dailymail dataset Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602... --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-9-cd4bf8bea840> in <module>() 22 23 ---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train') 25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation') 26 test = load_dataset('cnn_dailymail', '3.0.0', split='test') 5 frames /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' I've looked into it and couldn't find a solution. This looks like a Google Drive limitation.. Please try to use other hosts when possible
https://github.com/huggingface/datasets/issues/996
NotADirectoryError while loading the CNN/Dailymail dataset
The original links are google drive links. Would it be feasible for HF to maintain their own servers for this? Also, I think the same issue must also exist with TFDS.
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602... --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-9-cd4bf8bea840> in <module>() 22 23 ---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train') 25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation') 26 test = load_dataset('cnn_dailymail', '3.0.0', split='test') 5 frames /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
31
NotADirectoryError while loading the CNN/Dailymail dataset Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602... --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-9-cd4bf8bea840> in <module>() 22 23 ---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train') 25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation') 26 test = load_dataset('cnn_dailymail', '3.0.0', split='test') 5 frames /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' The original links are google drive links. Would it be feasible for HF to maintain their own servers for this? Also, I think the same issue must also exist with TFDS.
https://github.com/huggingface/datasets/issues/996
NotADirectoryError while loading the CNN/Dailymail dataset
It's possible to host data on our side but we should ask the authors. TFDS has the same issue and doesn't have a solution either afaik. Otherwise you can use the google drive link, but it it's not that convenient because of this quota issue.
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602... --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-9-cd4bf8bea840> in <module>() 22 23 ---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train') 25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation') 26 test = load_dataset('cnn_dailymail', '3.0.0', split='test') 5 frames /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
45
NotADirectoryError while loading the CNN/Dailymail dataset Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602... --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-9-cd4bf8bea840> in <module>() 22 23 ---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train') 25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation') 26 test = load_dataset('cnn_dailymail', '3.0.0', split='test') 5 frames /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' It's possible to host data on our side but we should ask the authors. TFDS has the same issue and doesn't have a solution either afaik. Otherwise you can use the google drive link, but it it's not that convenient because of this quota issue.
https://github.com/huggingface/datasets/issues/996
NotADirectoryError while loading the CNN/Dailymail dataset
Okay. I imagine asking every author who shares their dataset on Google Drive will also be cumbersome.
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602... --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-9-cd4bf8bea840> in <module>() 22 23 ---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train') 25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation') 26 test = load_dataset('cnn_dailymail', '3.0.0', split='test') 5 frames /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
17
NotADirectoryError while loading the CNN/Dailymail dataset Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602... --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-9-cd4bf8bea840> in <module>() 22 23 ---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train') 25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation') 26 test = load_dataset('cnn_dailymail', '3.0.0', split='test') 5 frames /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' Okay. I imagine asking every author who shares their dataset on Google Drive will also be cumbersome.
https://github.com/huggingface/datasets/issues/996
NotADirectoryError while loading the CNN/Dailymail dataset
Not as long as the data is stored on GG drive unfortunately. Maybe we can ask if there's a mirror ? Hi @JafferWilson is there a download link to get cnn dailymail from another host than GG drive ? To give you some context, this library provides tools to download and process datasets. For CNN DailyMail the data are downloaded from the link you provide on your github repository. Unfortunately because of GG drive quotas, many users are not able to load this dataset.
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602... --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-9-cd4bf8bea840> in <module>() 22 23 ---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train') 25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation') 26 test = load_dataset('cnn_dailymail', '3.0.0', split='test') 5 frames /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
84
NotADirectoryError while loading the CNN/Dailymail dataset Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602... --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-9-cd4bf8bea840> in <module>() 22 23 ---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train') 25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation') 26 test = load_dataset('cnn_dailymail', '3.0.0', split='test') 5 frames /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' Not as long as the data is stored on GG drive unfortunately. Maybe we can ask if there's a mirror ? Hi @JafferWilson is there a download link to get cnn dailymail from another host than GG drive ? To give you some context, this library provides tools to download and process datasets. For CNN DailyMail the data are downloaded from the link you provide on your github repository. Unfortunately because of GG drive quotas, many users are not able to load this dataset.
https://github.com/huggingface/datasets/issues/996
NotADirectoryError while loading the CNN/Dailymail dataset
Thanks for the link @mrazizi ! Apparently the original authors don't host the dataset themselves ("for legal reasons", source [here](https://github.com/abisee/cnn-dailymail/issues/9)).
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602... --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-9-cd4bf8bea840> in <module>() 22 23 ---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train') 25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation') 26 test = load_dataset('cnn_dailymail', '3.0.0', split='test') 5 frames /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
20
NotADirectoryError while loading the CNN/Dailymail dataset Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602... --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-9-cd4bf8bea840> in <module>() 22 23 ---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train') 25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation') 26 test = load_dataset('cnn_dailymail', '3.0.0', split='test') 5 frames /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' Thanks for the link @mrazizi ! Apparently the original authors don't host the dataset themselves ("for legal reasons", source [here](https://github.com/abisee/cnn-dailymail/issues/9)).
https://github.com/huggingface/datasets/issues/993
Problem downloading amazon_reviews_multi
Hi @hfawaz ! This is working fine for me. Is it a repeated occurence? Have you tried from the latest verion?
Thanks for adding the dataset. After trying to load the dataset, I am getting the following error: `ConnectionError: Couldn't reach https://amazon-reviews-ml.s3-us-west-2.amazonaws.com/json/train/dataset_fr_train.json ` I used the following code to load the dataset: `load_dataset( dataset_name, "all_languages", cache_dir=".data" )` I am using version 1.1.3 of `datasets` Note that I can perform a successfull `wget https://amazon-reviews-ml.s3-us-west-2.amazonaws.com/json/train/dataset_fr_train.json`
21
Problem downloading amazon_reviews_multi Thanks for adding the dataset. After trying to load the dataset, I am getting the following error: `ConnectionError: Couldn't reach https://amazon-reviews-ml.s3-us-west-2.amazonaws.com/json/train/dataset_fr_train.json ` I used the following code to load the dataset: `load_dataset( dataset_name, "all_languages", cache_dir=".data" )` I am using version 1.1.3 of `datasets` Note that I can perform a successfull `wget https://amazon-reviews-ml.s3-us-west-2.amazonaws.com/json/train/dataset_fr_train.json` Hi @hfawaz ! This is working fine for me. Is it a repeated occurence? Have you tried from the latest verion?
https://github.com/huggingface/datasets/issues/988
making sure datasets are not loaded in memory and distributed training of them
my implementation of sharding per TPU core: https://github.com/google-research/ruse/blob/d4dd58a2d8efe0ffb1a9e9e77e3228d6824d3c3c/seq2seq/trainers/t5_trainer.py#L316 my implementation of dataloader for this case https://github.com/google-research/ruse/blob/d4dd58a2d8efe0ffb1a9e9e77e3228d6824d3c3c/seq2seq/tasks/tasks.py#L496
Hi I am dealing with large-scale datasets which I need to train distributedly, I used the shard function to divide the dataset across the cores, without any sampler, this does not work for distributed training and does not become any faster than 1 TPU core. 1) how I can make sure data is not loaded in memory 2) in case of distributed training with iterative datasets which measures needs to be taken? Is this all sharding the data only. I was wondering if there can be possibility for me to discuss this with someone with distributed training with iterative datasets using dataset library. thanks
16
making sure datasets are not loaded in memory and distributed training of them Hi I am dealing with large-scale datasets which I need to train distributedly, I used the shard function to divide the dataset across the cores, without any sampler, this does not work for distributed training and does not become any faster than 1 TPU core. 1) how I can make sure data is not loaded in memory 2) in case of distributed training with iterative datasets which measures needs to be taken? Is this all sharding the data only. I was wondering if there can be possibility for me to discuss this with someone with distributed training with iterative datasets using dataset library. thanks my implementation of sharding per TPU core: https://github.com/google-research/ruse/blob/d4dd58a2d8efe0ffb1a9e9e77e3228d6824d3c3c/seq2seq/trainers/t5_trainer.py#L316 my implementation of dataloader for this case https://github.com/google-research/ruse/blob/d4dd58a2d8efe0ffb1a9e9e77e3228d6824d3c3c/seq2seq/tasks/tasks.py#L496
https://github.com/huggingface/datasets/issues/988
making sure datasets are not loaded in memory and distributed training of them
Hi! You can use the `assert not bool(dataset.cache_files)` assertion to ensure your data is in memory. And I suggest using `accelerate` for distributed training.
Hi I am dealing with large-scale datasets which I need to train distributedly, I used the shard function to divide the dataset across the cores, without any sampler, this does not work for distributed training and does not become any faster than 1 TPU core. 1) how I can make sure data is not loaded in memory 2) in case of distributed training with iterative datasets which measures needs to be taken? Is this all sharding the data only. I was wondering if there can be possibility for me to discuss this with someone with distributed training with iterative datasets using dataset library. thanks
24
making sure datasets are not loaded in memory and distributed training of them Hi I am dealing with large-scale datasets which I need to train distributedly, I used the shard function to divide the dataset across the cores, without any sampler, this does not work for distributed training and does not become any faster than 1 TPU core. 1) how I can make sure data is not loaded in memory 2) in case of distributed training with iterative datasets which measures needs to be taken? Is this all sharding the data only. I was wondering if there can be possibility for me to discuss this with someone with distributed training with iterative datasets using dataset library. thanks Hi! You can use the `assert not bool(dataset.cache_files)` assertion to ensure your data is in memory. And I suggest using `accelerate` for distributed training.
https://github.com/huggingface/datasets/issues/961
sample multiple datasets
here I share my dataloader currently for multiple tasks: https://gist.github.com/rabeehkarimimahabadi/39f9444a4fb6f53dcc4fca5d73bf8195 I need to train my model distributedly with this dataloader, "MultiTasksataloader", currently this does not work in distributed fasion, to save on memory I tried to use iterative datasets, could you have a look in this dataloader and tell me if this is indeed the case? not sure how to make datasets being iterative to not load them in memory, then I remove the sampler for dataloader, and shard the data per core, could you tell me please how I should implement this case in datasets library? and how do you find my implementation in terms of correctness? thanks
Hi I am dealing with multiple datasets, I need to have a dataloader over them with a condition that in each batch data samples are coming from one of the datasets. My main question is: - I need to have a way to sample the datasets first with some weights, lets say 2x dataset1 1x dataset2, could you point me how I can do it sub-questions: - I want to concat sampled datasets and define one dataloader on it, then I need a way to make sure batches come from 1 dataset in each iteration, could you assist me how I can do? - I use iterative-type of datasets, but I need a method of shuffling still since it brings accuracy performance issues if not doing it, thanks for the help.
109
sample multiple datasets Hi I am dealing with multiple datasets, I need to have a dataloader over them with a condition that in each batch data samples are coming from one of the datasets. My main question is: - I need to have a way to sample the datasets first with some weights, lets say 2x dataset1 1x dataset2, could you point me how I can do it sub-questions: - I want to concat sampled datasets and define one dataloader on it, then I need a way to make sure batches come from 1 dataset in each iteration, could you assist me how I can do? - I use iterative-type of datasets, but I need a method of shuffling still since it brings accuracy performance issues if not doing it, thanks for the help. here I share my dataloader currently for multiple tasks: https://gist.github.com/rabeehkarimimahabadi/39f9444a4fb6f53dcc4fca5d73bf8195 I need to train my model distributedly with this dataloader, "MultiTasksataloader", currently this does not work in distributed fasion, to save on memory I tried to use iterative datasets, could you have a look in this dataloader and tell me if this is indeed the case? not sure how to make datasets being iterative to not load them in memory, then I remove the sampler for dataloader, and shard the data per core, could you tell me please how I should implement this case in datasets library? and how do you find my implementation in terms of correctness? thanks
https://github.com/huggingface/datasets/issues/961
sample multiple datasets
Thanks @rabeehk for sharing. The sampler basically returns a list of integers to sample from each task's dataset. I was wondering how to use it with two `torch.Dataset` of different tasks. Also, do I need to shard across processes while creating an Iterable Dataset?
Hi I am dealing with multiple datasets, I need to have a dataloader over them with a condition that in each batch data samples are coming from one of the datasets. My main question is: - I need to have a way to sample the datasets first with some weights, lets say 2x dataset1 1x dataset2, could you point me how I can do it sub-questions: - I want to concat sampled datasets and define one dataloader on it, then I need a way to make sure batches come from 1 dataset in each iteration, could you assist me how I can do? - I use iterative-type of datasets, but I need a method of shuffling still since it brings accuracy performance issues if not doing it, thanks for the help.
44
sample multiple datasets Hi I am dealing with multiple datasets, I need to have a dataloader over them with a condition that in each batch data samples are coming from one of the datasets. My main question is: - I need to have a way to sample the datasets first with some weights, lets say 2x dataset1 1x dataset2, could you point me how I can do it sub-questions: - I want to concat sampled datasets and define one dataloader on it, then I need a way to make sure batches come from 1 dataset in each iteration, could you assist me how I can do? - I use iterative-type of datasets, but I need a method of shuffling still since it brings accuracy performance issues if not doing it, thanks for the help. Thanks @rabeehk for sharing. The sampler basically returns a list of integers to sample from each task's dataset. I was wondering how to use it with two `torch.Dataset` of different tasks. Also, do I need to shard across processes while creating an Iterable Dataset?
https://github.com/huggingface/datasets/issues/961
sample multiple datasets
We now have `interleave_datasets` in the API that allows you to cycle/sample with probabilities (with various stopping strategies) through a list of datasets. However, more specific behavior should be implemented manually.
Hi I am dealing with multiple datasets, I need to have a dataloader over them with a condition that in each batch data samples are coming from one of the datasets. My main question is: - I need to have a way to sample the datasets first with some weights, lets say 2x dataset1 1x dataset2, could you point me how I can do it sub-questions: - I want to concat sampled datasets and define one dataloader on it, then I need a way to make sure batches come from 1 dataset in each iteration, could you assist me how I can do? - I use iterative-type of datasets, but I need a method of shuffling still since it brings accuracy performance issues if not doing it, thanks for the help.
31
sample multiple datasets Hi I am dealing with multiple datasets, I need to have a dataloader over them with a condition that in each batch data samples are coming from one of the datasets. My main question is: - I need to have a way to sample the datasets first with some weights, lets say 2x dataset1 1x dataset2, could you point me how I can do it sub-questions: - I want to concat sampled datasets and define one dataloader on it, then I need a way to make sure batches come from 1 dataset in each iteration, could you assist me how I can do? - I use iterative-type of datasets, but I need a method of shuffling still since it brings accuracy performance issues if not doing it, thanks for the help. We now have `interleave_datasets` in the API that allows you to cycle/sample with probabilities (with various stopping strategies) through a list of datasets. However, more specific behavior should be implemented manually.
https://github.com/huggingface/datasets/issues/937
Local machine/cluster Beam Datasets example/tutorial
I tried to make it run once on the SparkRunner but it seems that this runner has some issues when it is run locally. From my experience the DirectRunner is fine though, even if it's clearly not memory efficient. It would be awesome though to make it work locally on a SparkRunner ! Did you manage to make your processing work ?
Hi, I'm wondering if https://huggingface.co/docs/datasets/beam_dataset.html has an non-GCP or non-Dataflow version example/tutorial? I tried to migrate it to run on DirectRunner and SparkRunner, however, there were way too many runtime errors that I had to fix during the process, and even so I wasn't able to get either runner correctly producing the desired output. Thanks! Shang
62
Local machine/cluster Beam Datasets example/tutorial Hi, I'm wondering if https://huggingface.co/docs/datasets/beam_dataset.html has an non-GCP or non-Dataflow version example/tutorial? I tried to migrate it to run on DirectRunner and SparkRunner, however, there were way too many runtime errors that I had to fix during the process, and even so I wasn't able to get either runner correctly producing the desired output. Thanks! Shang I tried to make it run once on the SparkRunner but it seems that this runner has some issues when it is run locally. From my experience the DirectRunner is fine though, even if it's clearly not memory efficient. It would be awesome though to make it work locally on a SparkRunner ! Did you manage to make your processing work ?
https://github.com/huggingface/datasets/issues/919
wrong length with datasets
Also, I cannot first convert it to torch format, since huggingface seq2seq_trainer codes process the datasets afterwards during datacollector function to make it optimize for TPUs.
Hi I have a MRPC dataset which I convert it to seq2seq format, then this is of this format: `Dataset(features: {'src_texts': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 10) ` I feed it to a dataloader: ``` dataloader = DataLoader( train_dataset, batch_size=self.args.train_batch_size, sampler=train_sampler, collate_fn=self.data_collator, drop_last=self.args.dataloader_drop_last, num_workers=self.args.dataloader_num_workers, ) ``` now if I type len(dataloader) this is 1, which is wrong, and this needs to be 10. could you assist me please? thanks
26
wrong length with datasets Hi I have a MRPC dataset which I convert it to seq2seq format, then this is of this format: `Dataset(features: {'src_texts': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 10) ` I feed it to a dataloader: ``` dataloader = DataLoader( train_dataset, batch_size=self.args.train_batch_size, sampler=train_sampler, collate_fn=self.data_collator, drop_last=self.args.dataloader_drop_last, num_workers=self.args.dataloader_num_workers, ) ``` now if I type len(dataloader) this is 1, which is wrong, and this needs to be 10. could you assist me please? thanks Also, I cannot first convert it to torch format, since huggingface seq2seq_trainer codes process the datasets afterwards during datacollector function to make it optimize for TPUs.
https://github.com/huggingface/datasets/issues/915
Shall we change the hashing to encoding to reduce potential replicated cache files?
This is an interesting idea ! Do you have ideas about how to approach the decoding and the normalization ?
Hi there. For now, we are using `xxhash` to hash the transformations to fingerprint and we will save a copy of the processed dataset to disk if there is a new hash value. However, there are some transformations that are idempotent or commutative to each other. I think that encoding the transformation chain as the fingerprint may help in those cases, for example, use `base64.urlsafe_b64encode`. In this way, before we want to save a new copy, we can decode the transformation chain and normalize it to prevent omit potential reuse. As the main targets of this project are the really large datasets that cannot be loaded entirely in memory, I believe it would save a lot of time if we can avoid some write. If you have interest in this, I'd love to help :).
20
Shall we change the hashing to encoding to reduce potential replicated cache files? Hi there. For now, we are using `xxhash` to hash the transformations to fingerprint and we will save a copy of the processed dataset to disk if there is a new hash value. However, there are some transformations that are idempotent or commutative to each other. I think that encoding the transformation chain as the fingerprint may help in those cases, for example, use `base64.urlsafe_b64encode`. In this way, before we want to save a new copy, we can decode the transformation chain and normalize it to prevent omit potential reuse. As the main targets of this project are the really large datasets that cannot be loaded entirely in memory, I believe it would save a lot of time if we can avoid some write. If you have interest in this, I'd love to help :). This is an interesting idea ! Do you have ideas about how to approach the decoding and the normalization ?
https://github.com/huggingface/datasets/issues/915
Shall we change the hashing to encoding to reduce potential replicated cache files?
@lhoestq I think we first need to save the transformation chain to a list in `self._fingerprint`. Then we can - decode all the current saved datasets to see if there is already one that is equivalent to the transformation we need now. - or, calculate all the possible hash value of the current chain for comparison so that we could continue to use hashing. If we find one, we can adjust the list in `self._fingerprint` to it. As for the transformation reordering rules, we can just start with some manual rules, like two sort on the same column should merge to one, filter and select can change orders. And for encoding and decoding, we can just manually specify `sort` is 0, `shuffling` is 2 and create a base-n number or use some general algorithm like `base64.urlsafe_b64encode`. Because we are not doing lazy evaluation now, we may not be able to normalize the transformation to its minimal form. If we want to support that, we can provde a `Sequential` api and let user input a list or transformation, so that user would not use the intermediate datasets. This would look like tf.data.Dataset.
Hi there. For now, we are using `xxhash` to hash the transformations to fingerprint and we will save a copy of the processed dataset to disk if there is a new hash value. However, there are some transformations that are idempotent or commutative to each other. I think that encoding the transformation chain as the fingerprint may help in those cases, for example, use `base64.urlsafe_b64encode`. In this way, before we want to save a new copy, we can decode the transformation chain and normalize it to prevent omit potential reuse. As the main targets of this project are the really large datasets that cannot be loaded entirely in memory, I believe it would save a lot of time if we can avoid some write. If you have interest in this, I'd love to help :).
191
Shall we change the hashing to encoding to reduce potential replicated cache files? Hi there. For now, we are using `xxhash` to hash the transformations to fingerprint and we will save a copy of the processed dataset to disk if there is a new hash value. However, there are some transformations that are idempotent or commutative to each other. I think that encoding the transformation chain as the fingerprint may help in those cases, for example, use `base64.urlsafe_b64encode`. In this way, before we want to save a new copy, we can decode the transformation chain and normalize it to prevent omit potential reuse. As the main targets of this project are the really large datasets that cannot be loaded entirely in memory, I believe it would save a lot of time if we can avoid some write. If you have interest in this, I'd love to help :). @lhoestq I think we first need to save the transformation chain to a list in `self._fingerprint`. Then we can - decode all the current saved datasets to see if there is already one that is equivalent to the transformation we need now. - or, calculate all the possible hash value of the current chain for comparison so that we could continue to use hashing. If we find one, we can adjust the list in `self._fingerprint` to it. As for the transformation reordering rules, we can just start with some manual rules, like two sort on the same column should merge to one, filter and select can change orders. And for encoding and decoding, we can just manually specify `sort` is 0, `shuffling` is 2 and create a base-n number or use some general algorithm like `base64.urlsafe_b64encode`. Because we are not doing lazy evaluation now, we may not be able to normalize the transformation to its minimal form. If we want to support that, we can provde a `Sequential` api and let user input a list or transformation, so that user would not use the intermediate datasets. This would look like tf.data.Dataset.
https://github.com/huggingface/datasets/issues/897
Dataset viewer issues
Thanks for reporting ! cc @srush for the empty feature list issue and the encoding issue cc @julien-c maybe we can update the url and just have a redirection from the old url to the new one ?
I was looking through the dataset viewer and I like it a lot. Version numbers, citation information, everything's there! I've spotted a few issues/bugs though: - the URL is still under `nlp`, perhaps an alias for `datasets` can be made - when I remove a **feature** (and the feature list is empty), I get an error. This is probably expected, but perhaps a better error message can be shown to the user ```bash IndexError: list index out of range Traceback: File "/home/sasha/streamlit/lib/streamlit/ScriptRunner.py", line 322, in _run_script exec(code, module.__dict__) File "/home/sasha/nlp-viewer/run.py", line 316, in <module> st.table(style) File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 122, in wrapped_method return dg._enqueue_new_element_delta(marshall_element, delta_type, last_index) File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 367, in _enqueue_new_element_delta rv = marshall_element(msg.delta.new_element) File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 120, in marshall_element return method(dg, element, *args, **kwargs) File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 2944, in table data_frame_proto.marshall_data_frame(data, element.table) File "/home/sasha/streamlit/lib/streamlit/elements/data_frame_proto.py", line 54, in marshall_data_frame _marshall_styles(proto_df.style, df, styler) File "/home/sasha/streamlit/lib/streamlit/elements/data_frame_proto.py", line 73, in _marshall_styles translated_style = styler._translate() File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/pandas/io/formats/style.py", line 351, in _translate * (len(clabels[0]) - len(hidden_columns)) ``` - there seems to be **an encoding issue** in the default view, the dataset examples are shown as raw monospace text, without a decent encoding. That makes it hard to read for languages that use a lot of special characters. Take for instance the [cs-en WMT19 set](https://huggingface.co/nlp/viewer/?dataset=wmt19&config=cs-en). This problem goes away when you enable "List view", because then some syntax highlighteris used, and the special characters are coded correctly.
38
Dataset viewer issues I was looking through the dataset viewer and I like it a lot. Version numbers, citation information, everything's there! I've spotted a few issues/bugs though: - the URL is still under `nlp`, perhaps an alias for `datasets` can be made - when I remove a **feature** (and the feature list is empty), I get an error. This is probably expected, but perhaps a better error message can be shown to the user ```bash IndexError: list index out of range Traceback: File "/home/sasha/streamlit/lib/streamlit/ScriptRunner.py", line 322, in _run_script exec(code, module.__dict__) File "/home/sasha/nlp-viewer/run.py", line 316, in <module> st.table(style) File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 122, in wrapped_method return dg._enqueue_new_element_delta(marshall_element, delta_type, last_index) File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 367, in _enqueue_new_element_delta rv = marshall_element(msg.delta.new_element) File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 120, in marshall_element return method(dg, element, *args, **kwargs) File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 2944, in table data_frame_proto.marshall_data_frame(data, element.table) File "/home/sasha/streamlit/lib/streamlit/elements/data_frame_proto.py", line 54, in marshall_data_frame _marshall_styles(proto_df.style, df, styler) File "/home/sasha/streamlit/lib/streamlit/elements/data_frame_proto.py", line 73, in _marshall_styles translated_style = styler._translate() File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/pandas/io/formats/style.py", line 351, in _translate * (len(clabels[0]) - len(hidden_columns)) ``` - there seems to be **an encoding issue** in the default view, the dataset examples are shown as raw monospace text, without a decent encoding. That makes it hard to read for languages that use a lot of special characters. Take for instance the [cs-en WMT19 set](https://huggingface.co/nlp/viewer/?dataset=wmt19&config=cs-en). This problem goes away when you enable "List view", because then some syntax highlighteris used, and the special characters are coded correctly. Thanks for reporting ! cc @srush for the empty feature list issue and the encoding issue cc @julien-c maybe we can update the url and just have a redirection from the old url to the new one ?
https://github.com/huggingface/datasets/issues/897
Dataset viewer issues
Ok, I redirected on our side to a new url. ⚠️ @srush: if you update the Streamlit config too to `/datasets/viewer`, let me know because I'll need to change our nginx config at the same time
I was looking through the dataset viewer and I like it a lot. Version numbers, citation information, everything's there! I've spotted a few issues/bugs though: - the URL is still under `nlp`, perhaps an alias for `datasets` can be made - when I remove a **feature** (and the feature list is empty), I get an error. This is probably expected, but perhaps a better error message can be shown to the user ```bash IndexError: list index out of range Traceback: File "/home/sasha/streamlit/lib/streamlit/ScriptRunner.py", line 322, in _run_script exec(code, module.__dict__) File "/home/sasha/nlp-viewer/run.py", line 316, in <module> st.table(style) File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 122, in wrapped_method return dg._enqueue_new_element_delta(marshall_element, delta_type, last_index) File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 367, in _enqueue_new_element_delta rv = marshall_element(msg.delta.new_element) File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 120, in marshall_element return method(dg, element, *args, **kwargs) File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 2944, in table data_frame_proto.marshall_data_frame(data, element.table) File "/home/sasha/streamlit/lib/streamlit/elements/data_frame_proto.py", line 54, in marshall_data_frame _marshall_styles(proto_df.style, df, styler) File "/home/sasha/streamlit/lib/streamlit/elements/data_frame_proto.py", line 73, in _marshall_styles translated_style = styler._translate() File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/pandas/io/formats/style.py", line 351, in _translate * (len(clabels[0]) - len(hidden_columns)) ``` - there seems to be **an encoding issue** in the default view, the dataset examples are shown as raw monospace text, without a decent encoding. That makes it hard to read for languages that use a lot of special characters. Take for instance the [cs-en WMT19 set](https://huggingface.co/nlp/viewer/?dataset=wmt19&config=cs-en). This problem goes away when you enable "List view", because then some syntax highlighteris used, and the special characters are coded correctly.
36
Dataset viewer issues I was looking through the dataset viewer and I like it a lot. Version numbers, citation information, everything's there! I've spotted a few issues/bugs though: - the URL is still under `nlp`, perhaps an alias for `datasets` can be made - when I remove a **feature** (and the feature list is empty), I get an error. This is probably expected, but perhaps a better error message can be shown to the user ```bash IndexError: list index out of range Traceback: File "/home/sasha/streamlit/lib/streamlit/ScriptRunner.py", line 322, in _run_script exec(code, module.__dict__) File "/home/sasha/nlp-viewer/run.py", line 316, in <module> st.table(style) File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 122, in wrapped_method return dg._enqueue_new_element_delta(marshall_element, delta_type, last_index) File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 367, in _enqueue_new_element_delta rv = marshall_element(msg.delta.new_element) File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 120, in marshall_element return method(dg, element, *args, **kwargs) File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 2944, in table data_frame_proto.marshall_data_frame(data, element.table) File "/home/sasha/streamlit/lib/streamlit/elements/data_frame_proto.py", line 54, in marshall_data_frame _marshall_styles(proto_df.style, df, styler) File "/home/sasha/streamlit/lib/streamlit/elements/data_frame_proto.py", line 73, in _marshall_styles translated_style = styler._translate() File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/pandas/io/formats/style.py", line 351, in _translate * (len(clabels[0]) - len(hidden_columns)) ``` - there seems to be **an encoding issue** in the default view, the dataset examples are shown as raw monospace text, without a decent encoding. That makes it hard to read for languages that use a lot of special characters. Take for instance the [cs-en WMT19 set](https://huggingface.co/nlp/viewer/?dataset=wmt19&config=cs-en). This problem goes away when you enable "List view", because then some syntax highlighteris used, and the special characters are coded correctly. Ok, I redirected on our side to a new url. ⚠️ @srush: if you update the Streamlit config too to `/datasets/viewer`, let me know because I'll need to change our nginx config at the same time
https://github.com/huggingface/datasets/issues/888
Nested lists are zipped unexpectedly
Yes following the Tensorflow Datasets convention, objects with type `Sequence of a Dict` are actually stored as a `dictionary of lists`. See the [documentation](https://huggingface.co/docs/datasets/features.html?highlight=features) for more details
I might misunderstand something, but I expect that if I define: ```python "top": datasets.features.Sequence({ "middle": datasets.features.Sequence({ "bottom": datasets.Value("int32") }) }) ``` And I then create an example: ```python yield 1, { "top": [{ "middle": [ {"bottom": 1}, {"bottom": 2} ] }] } ``` I then load my dataset: ```python train = load_dataset("my dataset")["train"] ``` and expect to be able to access `data[0]["top"][0]["middle"][0]`. That is not the case. Here is `data[0]` as JSON: ```json {"top": {"middle": [{"bottom": [1, 2]}]}} ``` Clearly different than the thing I inputted. ```json {"top": [{"middle": [{"bottom": 1},{"bottom": 2}]}]} ```
27
Nested lists are zipped unexpectedly I might misunderstand something, but I expect that if I define: ```python "top": datasets.features.Sequence({ "middle": datasets.features.Sequence({ "bottom": datasets.Value("int32") }) }) ``` And I then create an example: ```python yield 1, { "top": [{ "middle": [ {"bottom": 1}, {"bottom": 2} ] }] } ``` I then load my dataset: ```python train = load_dataset("my dataset")["train"] ``` and expect to be able to access `data[0]["top"][0]["middle"][0]`. That is not the case. Here is `data[0]` as JSON: ```json {"top": {"middle": [{"bottom": [1, 2]}]}} ``` Clearly different than the thing I inputted. ```json {"top": [{"middle": [{"bottom": 1},{"bottom": 2}]}]} ``` Yes following the Tensorflow Datasets convention, objects with type `Sequence of a Dict` are actually stored as a `dictionary of lists`. See the [documentation](https://huggingface.co/docs/datasets/features.html?highlight=features) for more details
https://github.com/huggingface/datasets/issues/888
Nested lists are zipped unexpectedly
Thanks. This is a bit (very) confusing, but I guess if its intended, I'll just work with it as if its how my data was originally structured :)
I might misunderstand something, but I expect that if I define: ```python "top": datasets.features.Sequence({ "middle": datasets.features.Sequence({ "bottom": datasets.Value("int32") }) }) ``` And I then create an example: ```python yield 1, { "top": [{ "middle": [ {"bottom": 1}, {"bottom": 2} ] }] } ``` I then load my dataset: ```python train = load_dataset("my dataset")["train"] ``` and expect to be able to access `data[0]["top"][0]["middle"][0]`. That is not the case. Here is `data[0]` as JSON: ```json {"top": {"middle": [{"bottom": [1, 2]}]}} ``` Clearly different than the thing I inputted. ```json {"top": [{"middle": [{"bottom": 1},{"bottom": 2}]}]} ```
28
Nested lists are zipped unexpectedly I might misunderstand something, but I expect that if I define: ```python "top": datasets.features.Sequence({ "middle": datasets.features.Sequence({ "bottom": datasets.Value("int32") }) }) ``` And I then create an example: ```python yield 1, { "top": [{ "middle": [ {"bottom": 1}, {"bottom": 2} ] }] } ``` I then load my dataset: ```python train = load_dataset("my dataset")["train"] ``` and expect to be able to access `data[0]["top"][0]["middle"][0]`. That is not the case. Here is `data[0]` as JSON: ```json {"top": {"middle": [{"bottom": [1, 2]}]}} ``` Clearly different than the thing I inputted. ```json {"top": [{"middle": [{"bottom": 1},{"bottom": 2}]}]} ``` Thanks. This is a bit (very) confusing, but I guess if its intended, I'll just work with it as if its how my data was originally structured :)
https://github.com/huggingface/datasets/issues/887
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
Yes right now `ArrayXD` can only be used as a column feature type, not a subtype. With the current Arrow limitations I don't think we'll be able to make it work as a subtype, however it should be possible to allow dimensions of dynamic sizes (`Array3D(shape=(None, 137, 2), dtype="float32")` for example since the [underlying arrow type](https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L236) allows dynamic sizes. For now I'd suggest the use of nested `Sequence` types. Once we have the dynamic sizes you can update the dataset. What do you think ?
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and their types features=datasets.Features( { "pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32")) } ), homepage=_HOMEPAGE, citation=_CITATION, ) def _generate_examples(self): """ Yields examples. """ yield 1, { "pose": [np.zeros(shape=(137, 2), dtype=np.float32)] } ``` But this doesn't work - > pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
85
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and their types features=datasets.Features( { "pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32")) } ), homepage=_HOMEPAGE, citation=_CITATION, ) def _generate_examples(self): """ Yields examples. """ yield 1, { "pose": [np.zeros(shape=(137, 2), dtype=np.float32)] } ``` But this doesn't work - > pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> Yes right now `ArrayXD` can only be used as a column feature type, not a subtype. With the current Arrow limitations I don't think we'll be able to make it work as a subtype, however it should be possible to allow dimensions of dynamic sizes (`Array3D(shape=(None, 137, 2), dtype="float32")` for example since the [underlying arrow type](https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L236) allows dynamic sizes. For now I'd suggest the use of nested `Sequence` types. Once we have the dynamic sizes you can update the dataset. What do you think ?
https://github.com/huggingface/datasets/issues/887
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
> Yes right now ArrayXD can only be used as a column feature type, not a subtype. Meaning it can't be nested under `Sequence`? If so, for now I'll just make it a python list and make it with the nested `Sequence` type you suggested.
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and their types features=datasets.Features( { "pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32")) } ), homepage=_HOMEPAGE, citation=_CITATION, ) def _generate_examples(self): """ Yields examples. """ yield 1, { "pose": [np.zeros(shape=(137, 2), dtype=np.float32)] } ``` But this doesn't work - > pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
45
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and their types features=datasets.Features( { "pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32")) } ), homepage=_HOMEPAGE, citation=_CITATION, ) def _generate_examples(self): """ Yields examples. """ yield 1, { "pose": [np.zeros(shape=(137, 2), dtype=np.float32)] } ``` But this doesn't work - > pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> > Yes right now ArrayXD can only be used as a column feature type, not a subtype. Meaning it can't be nested under `Sequence`? If so, for now I'll just make it a python list and make it with the nested `Sequence` type you suggested.
https://github.com/huggingface/datasets/issues/887
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
Yea unfortunately.. That's a current limitation with Arrow ExtensionTypes that can't be used in the default Arrow Array objects. We already have an ExtensionArray that allows us to use them as column types but not for subtypes. Maybe we can extend it, I haven't experimented with that yet
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and their types features=datasets.Features( { "pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32")) } ), homepage=_HOMEPAGE, citation=_CITATION, ) def _generate_examples(self): """ Yields examples. """ yield 1, { "pose": [np.zeros(shape=(137, 2), dtype=np.float32)] } ``` But this doesn't work - > pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
48
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and their types features=datasets.Features( { "pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32")) } ), homepage=_HOMEPAGE, citation=_CITATION, ) def _generate_examples(self): """ Yields examples. """ yield 1, { "pose": [np.zeros(shape=(137, 2), dtype=np.float32)] } ``` But this doesn't work - > pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> Yea unfortunately.. That's a current limitation with Arrow ExtensionTypes that can't be used in the default Arrow Array objects. We already have an ExtensionArray that allows us to use them as column types but not for subtypes. Maybe we can extend it, I haven't experimented with that yet
https://github.com/huggingface/datasets/issues/887
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
Cool So please consider this issue as a feature request for: ``` Array3D(shape=(None, 137, 2), dtype="float32") ``` its a way to represent videos, poses, and other cool sequences
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and their types features=datasets.Features( { "pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32")) } ), homepage=_HOMEPAGE, citation=_CITATION, ) def _generate_examples(self): """ Yields examples. """ yield 1, { "pose": [np.zeros(shape=(137, 2), dtype=np.float32)] } ``` But this doesn't work - > pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
28
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and their types features=datasets.Features( { "pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32")) } ), homepage=_HOMEPAGE, citation=_CITATION, ) def _generate_examples(self): """ Yields examples. """ yield 1, { "pose": [np.zeros(shape=(137, 2), dtype=np.float32)] } ``` But this doesn't work - > pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> Cool So please consider this issue as a feature request for: ``` Array3D(shape=(None, 137, 2), dtype="float32") ``` its a way to represent videos, poses, and other cool sequences
https://github.com/huggingface/datasets/issues/887
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
@lhoestq well, so sequence of sequences doesn't work either... ``` pyarrow.lib.ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648 ```
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and their types features=datasets.Features( { "pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32")) } ), homepage=_HOMEPAGE, citation=_CITATION, ) def _generate_examples(self): """ Yields examples. """ yield 1, { "pose": [np.zeros(shape=(137, 2), dtype=np.float32)] } ``` But this doesn't work - > pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
23
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and their types features=datasets.Features( { "pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32")) } ), homepage=_HOMEPAGE, citation=_CITATION, ) def _generate_examples(self): """ Yields examples. """ yield 1, { "pose": [np.zeros(shape=(137, 2), dtype=np.float32)] } ``` But this doesn't work - > pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> @lhoestq well, so sequence of sequences doesn't work either... ``` pyarrow.lib.ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648 ```
https://github.com/huggingface/datasets/issues/887
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
Working with Arrow can be quite fun sometimes. You can fix this issue by trying to reduce the writer batch size (same trick than the one used to reduce the RAM usage in https://github.com/huggingface/datasets/issues/741). Let me know if it works. I haven't investigated yet on https://github.com/huggingface/datasets/issues/741 since I was preparing this week's sprint to add datasets but this is in my priority list for early next week.
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and their types features=datasets.Features( { "pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32")) } ), homepage=_HOMEPAGE, citation=_CITATION, ) def _generate_examples(self): """ Yields examples. """ yield 1, { "pose": [np.zeros(shape=(137, 2), dtype=np.float32)] } ``` But this doesn't work - > pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
67
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and their types features=datasets.Features( { "pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32")) } ), homepage=_HOMEPAGE, citation=_CITATION, ) def _generate_examples(self): """ Yields examples. """ yield 1, { "pose": [np.zeros(shape=(137, 2), dtype=np.float32)] } ``` But this doesn't work - > pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> Working with Arrow can be quite fun sometimes. You can fix this issue by trying to reduce the writer batch size (same trick than the one used to reduce the RAM usage in https://github.com/huggingface/datasets/issues/741). Let me know if it works. I haven't investigated yet on https://github.com/huggingface/datasets/issues/741 since I was preparing this week's sprint to add datasets but this is in my priority list for early next week.
https://github.com/huggingface/datasets/issues/887
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
The batch size fix doesn't work... not for #741 and not for this dataset I'm trying (DGS corpus) Loading the DGS corpus takes 400GB of RAM, which is fine with me as my machine is large enough
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and their types features=datasets.Features( { "pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32")) } ), homepage=_HOMEPAGE, citation=_CITATION, ) def _generate_examples(self): """ Yields examples. """ yield 1, { "pose": [np.zeros(shape=(137, 2), dtype=np.float32)] } ``` But this doesn't work - > pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
37
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and their types features=datasets.Features( { "pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32")) } ), homepage=_HOMEPAGE, citation=_CITATION, ) def _generate_examples(self): """ Yields examples. """ yield 1, { "pose": [np.zeros(shape=(137, 2), dtype=np.float32)] } ``` But this doesn't work - > pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> The batch size fix doesn't work... not for #741 and not for this dataset I'm trying (DGS corpus) Loading the DGS corpus takes 400GB of RAM, which is fine with me as my machine is large enough
https://github.com/huggingface/datasets/issues/887
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
Not yet, I've been pretty busy with the dataset sprint lately but this is something that's been asked several times already. So I'll definitely work on this as soon as I'm done with the sprint and with the RAM issue you reported.
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and their types features=datasets.Features( { "pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32")) } ), homepage=_HOMEPAGE, citation=_CITATION, ) def _generate_examples(self): """ Yields examples. """ yield 1, { "pose": [np.zeros(shape=(137, 2), dtype=np.float32)] } ``` But this doesn't work - > pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
42
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and their types features=datasets.Features( { "pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32")) } ), homepage=_HOMEPAGE, citation=_CITATION, ) def _generate_examples(self): """ Yields examples. """ yield 1, { "pose": [np.zeros(shape=(137, 2), dtype=np.float32)] } ``` But this doesn't work - > pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> Not yet, I've been pretty busy with the dataset sprint lately but this is something that's been asked several times already. So I'll definitely work on this as soon as I'm done with the sprint and with the RAM issue you reported.
https://github.com/huggingface/datasets/issues/887
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
Hi @lhoestq, Any chance you have some updates on the supporting `ArrayXD` as a subtype or support of dynamic sized arrays? e.g.: `datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32"))` `Array3D(shape=(None, 137, 2), dtype="float32")`
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and their types features=datasets.Features( { "pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32")) } ), homepage=_HOMEPAGE, citation=_CITATION, ) def _generate_examples(self): """ Yields examples. """ yield 1, { "pose": [np.zeros(shape=(137, 2), dtype=np.float32)] } ``` But this doesn't work - > pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
29
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and their types features=datasets.Features( { "pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32")) } ), homepage=_HOMEPAGE, citation=_CITATION, ) def _generate_examples(self): """ Yields examples. """ yield 1, { "pose": [np.zeros(shape=(137, 2), dtype=np.float32)] } ``` But this doesn't work - > pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> Hi @lhoestq, Any chance you have some updates on the supporting `ArrayXD` as a subtype or support of dynamic sized arrays? e.g.: `datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32"))` `Array3D(shape=(None, 137, 2), dtype="float32")`
https://github.com/huggingface/datasets/issues/887
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
Hi ! We haven't worked in this lately and it's not in our very short-term roadmap since it requires a bit a work to make it work with arrow. Though this will definitely be added at one point.
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and their types features=datasets.Features( { "pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32")) } ), homepage=_HOMEPAGE, citation=_CITATION, ) def _generate_examples(self): """ Yields examples. """ yield 1, { "pose": [np.zeros(shape=(137, 2), dtype=np.float32)] } ``` But this doesn't work - > pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
38
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and their types features=datasets.Features( { "pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32")) } ), homepage=_HOMEPAGE, citation=_CITATION, ) def _generate_examples(self): """ Yields examples. """ yield 1, { "pose": [np.zeros(shape=(137, 2), dtype=np.float32)] } ``` But this doesn't work - > pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> Hi ! We haven't worked in this lately and it's not in our very short-term roadmap since it requires a bit a work to make it work with arrow. Though this will definitely be added at one point.
https://github.com/huggingface/datasets/issues/887
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
@lhoestq, thanks for the update. I actually tried to modify some piece of code to make it work. Can you please tell if I missing anything here? I think that for vast majority of cases it's enough to make first dimension of the array dynamic i.e. `shape=(None, 100, 100)`. For that, it's enough to modify class [ArrayExtensionArray](https://github.com/huggingface/datasets/blob/9ca24250ea44e7611c4dabd01ecf9415a7f0be6c/src/datasets/features.py#L397) to output list of arrays of different sizes instead of list of arrays of same sizes (current version) Below are my modifications of this class. ``` class ArrayExtensionArray(pa.ExtensionArray): def __array__(self): zero_copy_only = _is_zero_copy_only(self.storage.type) return self.to_numpy(zero_copy_only=zero_copy_only) def __getitem__(self, i): return self.storage[i] def to_numpy(self, zero_copy_only=True): storage: pa.ListArray = self.storage size = 1 for i in range(self.type.ndims): size *= self.type.shape[i] storage = storage.flatten() numpy_arr = storage.to_numpy(zero_copy_only=zero_copy_only) numpy_arr = numpy_arr.reshape(len(self), *self.type.shape) return numpy_arr def to_list_of_numpy(self, zero_copy_only=True): storage: pa.ListArray = self.storage shape = self.type.shape arrays = [] for dim in range(1, self.type.ndims): assert shape[dim] is not None, f"Support only dynamic size on first dimension. Got: {shape}" first_dim_offsets = np.array([off.as_py() for off in storage.offsets]) for i in range(len(storage)): storage_el = storage[i:i+1] first_dim = first_dim_offsets[i+1] - first_dim_offsets[i] # flatten storage for dim in range(self.type.ndims): storage_el = storage_el.flatten() numpy_arr = storage_el.to_numpy(zero_copy_only=zero_copy_only) arrays.append(numpy_arr.reshape(first_dim, *shape[1:])) return arrays def to_pylist(self): zero_copy_only = _is_zero_copy_only(self.storage.type) if self.type.shape[0] is None: return self.to_list_of_numpy(zero_copy_only=zero_copy_only) else: return self.to_numpy(zero_copy_only=zero_copy_only).tolist() ``` I ran few tests and it works as expected. Let me know what you think.
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and their types features=datasets.Features( { "pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32")) } ), homepage=_HOMEPAGE, citation=_CITATION, ) def _generate_examples(self): """ Yields examples. """ yield 1, { "pose": [np.zeros(shape=(137, 2), dtype=np.float32)] } ``` But this doesn't work - > pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
224
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and their types features=datasets.Features( { "pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32")) } ), homepage=_HOMEPAGE, citation=_CITATION, ) def _generate_examples(self): """ Yields examples. """ yield 1, { "pose": [np.zeros(shape=(137, 2), dtype=np.float32)] } ``` But this doesn't work - > pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> @lhoestq, thanks for the update. I actually tried to modify some piece of code to make it work. Can you please tell if I missing anything here? I think that for vast majority of cases it's enough to make first dimension of the array dynamic i.e. `shape=(None, 100, 100)`. For that, it's enough to modify class [ArrayExtensionArray](https://github.com/huggingface/datasets/blob/9ca24250ea44e7611c4dabd01ecf9415a7f0be6c/src/datasets/features.py#L397) to output list of arrays of different sizes instead of list of arrays of same sizes (current version) Below are my modifications of this class. ``` class ArrayExtensionArray(pa.ExtensionArray): def __array__(self): zero_copy_only = _is_zero_copy_only(self.storage.type) return self.to_numpy(zero_copy_only=zero_copy_only) def __getitem__(self, i): return self.storage[i] def to_numpy(self, zero_copy_only=True): storage: pa.ListArray = self.storage size = 1 for i in range(self.type.ndims): size *= self.type.shape[i] storage = storage.flatten() numpy_arr = storage.to_numpy(zero_copy_only=zero_copy_only) numpy_arr = numpy_arr.reshape(len(self), *self.type.shape) return numpy_arr def to_list_of_numpy(self, zero_copy_only=True): storage: pa.ListArray = self.storage shape = self.type.shape arrays = [] for dim in range(1, self.type.ndims): assert shape[dim] is not None, f"Support only dynamic size on first dimension. Got: {shape}" first_dim_offsets = np.array([off.as_py() for off in storage.offsets]) for i in range(len(storage)): storage_el = storage[i:i+1] first_dim = first_dim_offsets[i+1] - first_dim_offsets[i] # flatten storage for dim in range(self.type.ndims): storage_el = storage_el.flatten() numpy_arr = storage_el.to_numpy(zero_copy_only=zero_copy_only) arrays.append(numpy_arr.reshape(first_dim, *shape[1:])) return arrays def to_pylist(self): zero_copy_only = _is_zero_copy_only(self.storage.type) if self.type.shape[0] is None: return self.to_list_of_numpy(zero_copy_only=zero_copy_only) else: return self.to_numpy(zero_copy_only=zero_copy_only).tolist() ``` I ran few tests and it works as expected. Let me know what you think.
https://github.com/huggingface/datasets/issues/887
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
Thanks for diving into this ! Indeed focusing on making the first dimensions dynamic make total sense (and users could still re-order their dimensions to match this constraint). Your code looks great :) I think it can even be extended to support several dynamic dimensions if we want to. Feel free to open a PR to include these changes, then we can update our test suite to make sure it works in all use cases. In particular I think we might need a few tweaks to allow it to be converted to pandas (though I haven't tested yet): ```python from datasets import Dataset, Features, Array3D # this works matrix = [[1, 0], [0, 1]] features = Features({"a": Array3D(dtype="int32", shape=(1, 2, 2))}) d = Dataset.from_dict({"a": [[matrix], [matrix]]}) print(d.to_pandas()) # this should work as well matrix = [[1, 0], [0, 1]] features = Features({"a": Array3D(dtype="int32", shape=(None, 2, 2))}) d = Dataset.from_dict({"a": [[matrix], [matrix] * 2]}) print(d.to_pandas()) ``` I'll be happy to help you on this :)
I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and their types features=datasets.Features( { "pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32")) } ), homepage=_HOMEPAGE, citation=_CITATION, ) def _generate_examples(self): """ Yields examples. """ yield 1, { "pose": [np.zeros(shape=(137, 2), dtype=np.float32)] } ``` But this doesn't work - > pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type>
164
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic) ```python def _info(self): return datasets.DatasetInfo( description=_DESCRIPTION, # This defines the different columns of the dataset and their types features=datasets.Features( { "pose": datasets.features.Sequence(datasets.features.Array2D(shape=(137, 2), dtype="float32")) } ), homepage=_HOMEPAGE, citation=_CITATION, ) def _generate_examples(self): """ Yields examples. """ yield 1, { "pose": [np.zeros(shape=(137, 2), dtype=np.float32)] } ``` But this doesn't work - > pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> Thanks for diving into this ! Indeed focusing on making the first dimensions dynamic make total sense (and users could still re-order their dimensions to match this constraint). Your code looks great :) I think it can even be extended to support several dynamic dimensions if we want to. Feel free to open a PR to include these changes, then we can update our test suite to make sure it works in all use cases. In particular I think we might need a few tweaks to allow it to be converted to pandas (though I haven't tested yet): ```python from datasets import Dataset, Features, Array3D # this works matrix = [[1, 0], [0, 1]] features = Features({"a": Array3D(dtype="int32", shape=(1, 2, 2))}) d = Dataset.from_dict({"a": [[matrix], [matrix]]}) print(d.to_pandas()) # this should work as well matrix = [[1, 0], [0, 1]] features = Features({"a": Array3D(dtype="int32", shape=(None, 2, 2))}) d = Dataset.from_dict({"a": [[matrix], [matrix] * 2]}) print(d.to_pandas()) ``` I'll be happy to help you on this :)
https://github.com/huggingface/datasets/issues/883
Downloading/caching only a part of a datasets' dataset.
I think it would be a very helpful feature, because sometimes one only wants to evaluate models on the dev set, and the whole training data may be many times bigger. This makes the task impossible with limited memory resources.
Hi, I want to use the validation data *only* (of natural question). I don't want to have the whole dataset cached in my machine, just the dev set. Is this possible? I can't find a way to do it in the docs. Thank you, Sapir
40
Downloading/caching only a part of a datasets' dataset. Hi, I want to use the validation data *only* (of natural question). I don't want to have the whole dataset cached in my machine, just the dev set. Is this possible? I can't find a way to do it in the docs. Thank you, Sapir I think it would be a very helpful feature, because sometimes one only wants to evaluate models on the dev set, and the whole training data may be many times bigger. This makes the task impossible with limited memory resources.
https://github.com/huggingface/datasets/issues/880
Add SQA
I’ll take this one to test the workflow for the sprint next week cc @yjernite @lhoestq
## Adding a Dataset - **Name:** SQA (Sequential Question Answering) by Microsoft. - **Description:** The SQA dataset was created to explore the task of answering sequences of inter-related questions on HTML tables. It has 6,066 sequences with 17,553 questions in total. - **Paper:** https://www.microsoft.com/en-us/research/publication/search-based-neural-structured-learning-sequential-question-answering/ - **Data:** https://www.microsoft.com/en-us/download/details.aspx?id=54253 - **Motivation:** currently, the [Tapas](https://ai.googleblog.com/2020/04/using-neural-networks-to-find-answers.html) algorithm by Google AI is being added to the Transformers library (see https://github.com/huggingface/transformers/pull/8113). It would be great to use that model in combination with this dataset, on which it achieves SOTA results (average question accuracy of 0.71). Note 1: this dataset actually consists of 2 types of files: 1) TSV files, containing the questions, answer coordinates and answer texts (for training, dev and test) 2) a folder of csv files, which contain the actual tabular data Note 2: if you download the dataset straight from the download link above, then you will see that the `answer_coordinates` and `answer_text` columns are string lists of string tuples and strings respectively, which is not ideal. It would be better to make them true Python lists of tuples and strings respectively (using `ast.literal_eval`), before uploading them to the HuggingFace hub. Adding this would be great! Then we could possibly also add [WTQ (WikiTable Questions)](https://github.com/ppasupat/WikiTableQuestions) and [TabFact (Tabular Fact Checking)](https://github.com/wenhuchen/Table-Fact-Checking) on which TAPAS also achieves state-of-the-art results. Note that the TAPAS algorithm requires these datasets to first be converted into the SQA format. Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
16
Add SQA ## Adding a Dataset - **Name:** SQA (Sequential Question Answering) by Microsoft. - **Description:** The SQA dataset was created to explore the task of answering sequences of inter-related questions on HTML tables. It has 6,066 sequences with 17,553 questions in total. - **Paper:** https://www.microsoft.com/en-us/research/publication/search-based-neural-structured-learning-sequential-question-answering/ - **Data:** https://www.microsoft.com/en-us/download/details.aspx?id=54253 - **Motivation:** currently, the [Tapas](https://ai.googleblog.com/2020/04/using-neural-networks-to-find-answers.html) algorithm by Google AI is being added to the Transformers library (see https://github.com/huggingface/transformers/pull/8113). It would be great to use that model in combination with this dataset, on which it achieves SOTA results (average question accuracy of 0.71). Note 1: this dataset actually consists of 2 types of files: 1) TSV files, containing the questions, answer coordinates and answer texts (for training, dev and test) 2) a folder of csv files, which contain the actual tabular data Note 2: if you download the dataset straight from the download link above, then you will see that the `answer_coordinates` and `answer_text` columns are string lists of string tuples and strings respectively, which is not ideal. It would be better to make them true Python lists of tuples and strings respectively (using `ast.literal_eval`), before uploading them to the HuggingFace hub. Adding this would be great! Then we could possibly also add [WTQ (WikiTable Questions)](https://github.com/ppasupat/WikiTableQuestions) and [TabFact (Tabular Fact Checking)](https://github.com/wenhuchen/Table-Fact-Checking) on which TAPAS also achieves state-of-the-art results. Note that the TAPAS algorithm requires these datasets to first be converted into the SQA format. Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html). I’ll take this one to test the workflow for the sprint next week cc @yjernite @lhoestq
https://github.com/huggingface/datasets/issues/880
Add SQA
@thomwolf here's a slightly adapted version of the code from the [official Tapas repository](https://github.com/google-research/tapas/blob/master/tapas/utils/interaction_utils.py) that is used to turn the `answer_coordinates` and `answer_texts` columns into true Python lists of tuples/strings: ``` import pandas as pd import ast data = pd.read_csv("/content/sqa_data/random-split-1-dev.tsv", sep='\t') def _parse_answer_coordinates(answer_coordinate_str): """Parses the answer_coordinates of a question. Args: answer_coordinate_str: A string representation of a Python list of tuple strings. For example: "['(1, 4)','(1, 3)', ...]" """ try: answer_coordinates = [] # make a list of strings coords = ast.literal_eval(answer_coordinate_str) # parse each string as a tuple for row_index, column_index in sorted( ast.literal_eval(coord) for coord in coords): answer_coordinates.append((row_index, column_index)) except SyntaxError: raise ValueError('Unable to evaluate %s' % answer_coordinate_str) return answer_coordinates def _parse_answer_text(answer_text): """Populates the answer_texts field of `answer` by parsing `answer_text`. Args: answer_text: A string representation of a Python list of strings. For example: "[u'test', u'hello', ...]" """ try: answer = [] for value in ast.literal_eval(answer_text): answer.append(value) except SyntaxError: raise ValueError('Unable to evaluate %s' % answer_text) return answer data['answer_coordinates'] = data['answer_coordinates'].apply(lambda coords_str: _parse_answer_coordinates(coords_str)) data['answer_text'] = data['answer_text'].apply(lambda txt: _parse_answer_text(txt)) ``` Here I'm using Pandas to read in one of the TSV files (the dev set).
## Adding a Dataset - **Name:** SQA (Sequential Question Answering) by Microsoft. - **Description:** The SQA dataset was created to explore the task of answering sequences of inter-related questions on HTML tables. It has 6,066 sequences with 17,553 questions in total. - **Paper:** https://www.microsoft.com/en-us/research/publication/search-based-neural-structured-learning-sequential-question-answering/ - **Data:** https://www.microsoft.com/en-us/download/details.aspx?id=54253 - **Motivation:** currently, the [Tapas](https://ai.googleblog.com/2020/04/using-neural-networks-to-find-answers.html) algorithm by Google AI is being added to the Transformers library (see https://github.com/huggingface/transformers/pull/8113). It would be great to use that model in combination with this dataset, on which it achieves SOTA results (average question accuracy of 0.71). Note 1: this dataset actually consists of 2 types of files: 1) TSV files, containing the questions, answer coordinates and answer texts (for training, dev and test) 2) a folder of csv files, which contain the actual tabular data Note 2: if you download the dataset straight from the download link above, then you will see that the `answer_coordinates` and `answer_text` columns are string lists of string tuples and strings respectively, which is not ideal. It would be better to make them true Python lists of tuples and strings respectively (using `ast.literal_eval`), before uploading them to the HuggingFace hub. Adding this would be great! Then we could possibly also add [WTQ (WikiTable Questions)](https://github.com/ppasupat/WikiTableQuestions) and [TabFact (Tabular Fact Checking)](https://github.com/wenhuchen/Table-Fact-Checking) on which TAPAS also achieves state-of-the-art results. Note that the TAPAS algorithm requires these datasets to first be converted into the SQA format. Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html).
185
Add SQA ## Adding a Dataset - **Name:** SQA (Sequential Question Answering) by Microsoft. - **Description:** The SQA dataset was created to explore the task of answering sequences of inter-related questions on HTML tables. It has 6,066 sequences with 17,553 questions in total. - **Paper:** https://www.microsoft.com/en-us/research/publication/search-based-neural-structured-learning-sequential-question-answering/ - **Data:** https://www.microsoft.com/en-us/download/details.aspx?id=54253 - **Motivation:** currently, the [Tapas](https://ai.googleblog.com/2020/04/using-neural-networks-to-find-answers.html) algorithm by Google AI is being added to the Transformers library (see https://github.com/huggingface/transformers/pull/8113). It would be great to use that model in combination with this dataset, on which it achieves SOTA results (average question accuracy of 0.71). Note 1: this dataset actually consists of 2 types of files: 1) TSV files, containing the questions, answer coordinates and answer texts (for training, dev and test) 2) a folder of csv files, which contain the actual tabular data Note 2: if you download the dataset straight from the download link above, then you will see that the `answer_coordinates` and `answer_text` columns are string lists of string tuples and strings respectively, which is not ideal. It would be better to make them true Python lists of tuples and strings respectively (using `ast.literal_eval`), before uploading them to the HuggingFace hub. Adding this would be great! Then we could possibly also add [WTQ (WikiTable Questions)](https://github.com/ppasupat/WikiTableQuestions) and [TabFact (Tabular Fact Checking)](https://github.com/wenhuchen/Table-Fact-Checking) on which TAPAS also achieves state-of-the-art results. Note that the TAPAS algorithm requires these datasets to first be converted into the SQA format. Instructions to add a new dataset can be found [here](https://huggingface.co/docs/datasets/share_dataset.html). @thomwolf here's a slightly adapted version of the code from the [official Tapas repository](https://github.com/google-research/tapas/blob/master/tapas/utils/interaction_utils.py) that is used to turn the `answer_coordinates` and `answer_texts` columns into true Python lists of tuples/strings: ``` import pandas as pd import ast data = pd.read_csv("/content/sqa_data/random-split-1-dev.tsv", sep='\t') def _parse_answer_coordinates(answer_coordinate_str): """Parses the answer_coordinates of a question. Args: answer_coordinate_str: A string representation of a Python list of tuple strings. For example: "['(1, 4)','(1, 3)', ...]" """ try: answer_coordinates = [] # make a list of strings coords = ast.literal_eval(answer_coordinate_str) # parse each string as a tuple for row_index, column_index in sorted( ast.literal_eval(coord) for coord in coords): answer_coordinates.append((row_index, column_index)) except SyntaxError: raise ValueError('Unable to evaluate %s' % answer_coordinate_str) return answer_coordinates def _parse_answer_text(answer_text): """Populates the answer_texts field of `answer` by parsing `answer_text`. Args: answer_text: A string representation of a Python list of strings. For example: "[u'test', u'hello', ...]" """ try: answer = [] for value in ast.literal_eval(answer_text): answer.append(value) except SyntaxError: raise ValueError('Unable to evaluate %s' % answer_text) return answer data['answer_coordinates'] = data['answer_coordinates'].apply(lambda coords_str: _parse_answer_coordinates(coords_str)) data['answer_text'] = data['answer_text'].apply(lambda txt: _parse_answer_text(txt)) ``` Here I'm using Pandas to read in one of the TSV files (the dev set).
https://github.com/huggingface/datasets/issues/879
boolq does not load
Hi ! It runs on my side without issues. I tried ```python from datasets import load_dataset load_dataset("boolq") ``` What version of datasets and tensorflow are your runnning ? Also if you manage to get a minimal reproducible script (on google colab for example) that would be useful.
Hi I am getting these errors trying to load boolq thanks Traceback (most recent call last): File "test.py", line 5, in <module> data = AutoTask().get("boolq").get_dataset("train", n_obs=10) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 42, in get_dataset dataset = self.load_dataset(split=split) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 38, in load_dataset return datasets.load_dataset(self.task.name, split=split) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py", line 74, in _split_generators downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 150, in download_custom get_from_cache(url, cache_dir=cache_dir, local_files_only=True, use_etag=False) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 472, in get_from_cache f"Cannot find the requested files in the cached path at {cache_path} and outgoing traffic has been" FileNotFoundError: Cannot find the requested files in the cached path at /idiap/home/rkarimi/.cache/huggingface/datasets/eaee069e38f6ceaa84de02ad088c34e63ec97671f2cd1910ddb16b10dc60808c and outgoing traffic has been disabled. To enable file online look-ups, set 'local_files_only' to False.
47
boolq does not load Hi I am getting these errors trying to load boolq thanks Traceback (most recent call last): File "test.py", line 5, in <module> data = AutoTask().get("boolq").get_dataset("train", n_obs=10) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 42, in get_dataset dataset = self.load_dataset(split=split) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 38, in load_dataset return datasets.load_dataset(self.task.name, split=split) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py", line 74, in _split_generators downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 150, in download_custom get_from_cache(url, cache_dir=cache_dir, local_files_only=True, use_etag=False) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 472, in get_from_cache f"Cannot find the requested files in the cached path at {cache_path} and outgoing traffic has been" FileNotFoundError: Cannot find the requested files in the cached path at /idiap/home/rkarimi/.cache/huggingface/datasets/eaee069e38f6ceaa84de02ad088c34e63ec97671f2cd1910ddb16b10dc60808c and outgoing traffic has been disabled. To enable file online look-ups, set 'local_files_only' to False. Hi ! It runs on my side without issues. I tried ```python from datasets import load_dataset load_dataset("boolq") ``` What version of datasets and tensorflow are your runnning ? Also if you manage to get a minimal reproducible script (on google colab for example) that would be useful.
https://github.com/huggingface/datasets/issues/879
boolq does not load
hey i do the exact same commands. for me it fails i guess might be issues with caching maybe? thanks best rabeeh On Tue, Nov 24, 2020, 10:24 AM Quentin Lhoest <notifications@github.com> wrote: > Hi ! It runs on my side without issues. I tried > > from datasets import load_datasetload_dataset("boolq") > > What version of datasets and tensorflow are your runnning ? > Also if you manage to get a minimal reproducible script (on google colab > for example) that would be useful. > > — > You are receiving this because you authored the thread. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/datasets/issues/879#issuecomment-732769114>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/ABP4ZCGGDR2FUMRKZTIY5CTSRN3VXANCNFSM4T7R3U6A> > . >
Hi I am getting these errors trying to load boolq thanks Traceback (most recent call last): File "test.py", line 5, in <module> data = AutoTask().get("boolq").get_dataset("train", n_obs=10) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 42, in get_dataset dataset = self.load_dataset(split=split) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 38, in load_dataset return datasets.load_dataset(self.task.name, split=split) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py", line 74, in _split_generators downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 150, in download_custom get_from_cache(url, cache_dir=cache_dir, local_files_only=True, use_etag=False) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 472, in get_from_cache f"Cannot find the requested files in the cached path at {cache_path} and outgoing traffic has been" FileNotFoundError: Cannot find the requested files in the cached path at /idiap/home/rkarimi/.cache/huggingface/datasets/eaee069e38f6ceaa84de02ad088c34e63ec97671f2cd1910ddb16b10dc60808c and outgoing traffic has been disabled. To enable file online look-ups, set 'local_files_only' to False.
117
boolq does not load Hi I am getting these errors trying to load boolq thanks Traceback (most recent call last): File "test.py", line 5, in <module> data = AutoTask().get("boolq").get_dataset("train", n_obs=10) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 42, in get_dataset dataset = self.load_dataset(split=split) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 38, in load_dataset return datasets.load_dataset(self.task.name, split=split) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py", line 74, in _split_generators downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 150, in download_custom get_from_cache(url, cache_dir=cache_dir, local_files_only=True, use_etag=False) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 472, in get_from_cache f"Cannot find the requested files in the cached path at {cache_path} and outgoing traffic has been" FileNotFoundError: Cannot find the requested files in the cached path at /idiap/home/rkarimi/.cache/huggingface/datasets/eaee069e38f6ceaa84de02ad088c34e63ec97671f2cd1910ddb16b10dc60808c and outgoing traffic has been disabled. To enable file online look-ups, set 'local_files_only' to False. hey i do the exact same commands. for me it fails i guess might be issues with caching maybe? thanks best rabeeh On Tue, Nov 24, 2020, 10:24 AM Quentin Lhoest <notifications@github.com> wrote: > Hi ! It runs on my side without issues. I tried > > from datasets import load_datasetload_dataset("boolq") > > What version of datasets and tensorflow are your runnning ? > Also if you manage to get a minimal reproducible script (on google colab > for example) that would be useful. > > — > You are receiving this because you authored the thread. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/datasets/issues/879#issuecomment-732769114>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/ABP4ZCGGDR2FUMRKZTIY5CTSRN3VXANCNFSM4T7R3U6A> > . >
https://github.com/huggingface/datasets/issues/879
boolq does not load
Could you check if it works on the master branch ? You can use `load_dataset("boolq", script_version="master")` to do so. We did some changes recently in boolq to remove the TF dependency and we changed the way the data files are downloaded in https://github.com/huggingface/datasets/pull/881
Hi I am getting these errors trying to load boolq thanks Traceback (most recent call last): File "test.py", line 5, in <module> data = AutoTask().get("boolq").get_dataset("train", n_obs=10) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 42, in get_dataset dataset = self.load_dataset(split=split) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 38, in load_dataset return datasets.load_dataset(self.task.name, split=split) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py", line 74, in _split_generators downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 150, in download_custom get_from_cache(url, cache_dir=cache_dir, local_files_only=True, use_etag=False) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 472, in get_from_cache f"Cannot find the requested files in the cached path at {cache_path} and outgoing traffic has been" FileNotFoundError: Cannot find the requested files in the cached path at /idiap/home/rkarimi/.cache/huggingface/datasets/eaee069e38f6ceaa84de02ad088c34e63ec97671f2cd1910ddb16b10dc60808c and outgoing traffic has been disabled. To enable file online look-ups, set 'local_files_only' to False.
43
boolq does not load Hi I am getting these errors trying to load boolq thanks Traceback (most recent call last): File "test.py", line 5, in <module> data = AutoTask().get("boolq").get_dataset("train", n_obs=10) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 42, in get_dataset dataset = self.load_dataset(split=split) File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 38, in load_dataset return datasets.load_dataset(self.task.name, split=split) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File " /idiap/home/rkarimi/.cache/huggingface/modules/datasets_modules/datasets/boolq/2987db1f15deaa19500ae24de560eabeaf1f8ef51df88c0470beeec72943bf11/boolq.py", line 74, in _split_generators downloaded_files = dl_manager.download_custom(urls_to_download, tf.io.gfile.copy) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 150, in download_custom get_from_cache(url, cache_dir=cache_dir, local_files_only=True, use_etag=False) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 472, in get_from_cache f"Cannot find the requested files in the cached path at {cache_path} and outgoing traffic has been" FileNotFoundError: Cannot find the requested files in the cached path at /idiap/home/rkarimi/.cache/huggingface/datasets/eaee069e38f6ceaa84de02ad088c34e63ec97671f2cd1910ddb16b10dc60808c and outgoing traffic has been disabled. To enable file online look-ups, set 'local_files_only' to False. Could you check if it works on the master branch ? You can use `load_dataset("boolq", script_version="master")` to do so. We did some changes recently in boolq to remove the TF dependency and we changed the way the data files are downloaded in https://github.com/huggingface/datasets/pull/881
https://github.com/huggingface/datasets/issues/878
Loading Data From S3 Path in Sagemaker
> neat feature I dint get these clearly, can you please elaborate like how to work on these
In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files["validation"] = valid_path data_files["test"] = test_path extension = train_path.split(".")[-1] datasets = load_dataset(extension, data_files=data_files, s3_enabled=True) print(datasets)` I getting an error of `algo-1-7plil_1 | File "main.py", line 21, in <module> algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__ algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file))) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime algo-1-7plil_1 | return os.stat(filename).st_mtime algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv` But when im trying with pandas , it is able to load from S3 Does the datasets library support S3 path to load
18
Loading Data From S3 Path in Sagemaker In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files["validation"] = valid_path data_files["test"] = test_path extension = train_path.split(".")[-1] datasets = load_dataset(extension, data_files=data_files, s3_enabled=True) print(datasets)` I getting an error of `algo-1-7plil_1 | File "main.py", line 21, in <module> algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__ algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file))) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime algo-1-7plil_1 | return os.stat(filename).st_mtime algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv` But when im trying with pandas , it is able to load from S3 Does the datasets library support S3 path to load > neat feature I dint get these clearly, can you please elaborate like how to work on these
https://github.com/huggingface/datasets/issues/878
Loading Data From S3 Path in Sagemaker
It could maybe work almost out of the box just by using `cached_path` in the text/csv/json scripts, no?
In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files["validation"] = valid_path data_files["test"] = test_path extension = train_path.split(".")[-1] datasets = load_dataset(extension, data_files=data_files, s3_enabled=True) print(datasets)` I getting an error of `algo-1-7plil_1 | File "main.py", line 21, in <module> algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__ algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file))) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime algo-1-7plil_1 | return os.stat(filename).st_mtime algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv` But when im trying with pandas , it is able to load from S3 Does the datasets library support S3 path to load
18
Loading Data From S3 Path in Sagemaker In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files["validation"] = valid_path data_files["test"] = test_path extension = train_path.split(".")[-1] datasets = load_dataset(extension, data_files=data_files, s3_enabled=True) print(datasets)` I getting an error of `algo-1-7plil_1 | File "main.py", line 21, in <module> algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__ algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file))) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime algo-1-7plil_1 | return os.stat(filename).st_mtime algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv` But when im trying with pandas , it is able to load from S3 Does the datasets library support S3 path to load It could maybe work almost out of the box just by using `cached_path` in the text/csv/json scripts, no?
https://github.com/huggingface/datasets/issues/878
Loading Data From S3 Path in Sagemaker
Thanks thomwolf and julien-c I'm still confusion on what you guys said, I have solved the problem as follows: 1. read the csv file using pandas from s3 2. Convert to dictionary key as column name and values as list column data 3. convert it to Dataset using `from datasets import Dataset` `train_dataset = Dataset.from_dict(train_dict)`
In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files["validation"] = valid_path data_files["test"] = test_path extension = train_path.split(".")[-1] datasets = load_dataset(extension, data_files=data_files, s3_enabled=True) print(datasets)` I getting an error of `algo-1-7plil_1 | File "main.py", line 21, in <module> algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__ algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file))) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime algo-1-7plil_1 | return os.stat(filename).st_mtime algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv` But when im trying with pandas , it is able to load from S3 Does the datasets library support S3 path to load
55
Loading Data From S3 Path in Sagemaker In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files["validation"] = valid_path data_files["test"] = test_path extension = train_path.split(".")[-1] datasets = load_dataset(extension, data_files=data_files, s3_enabled=True) print(datasets)` I getting an error of `algo-1-7plil_1 | File "main.py", line 21, in <module> algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__ algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file))) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime algo-1-7plil_1 | return os.stat(filename).st_mtime algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv` But when im trying with pandas , it is able to load from S3 Does the datasets library support S3 path to load Thanks thomwolf and julien-c I'm still confusion on what you guys said, I have solved the problem as follows: 1. read the csv file using pandas from s3 2. Convert to dictionary key as column name and values as list column data 3. convert it to Dataset using `from datasets import Dataset` `train_dataset = Dataset.from_dict(train_dict)`
https://github.com/huggingface/datasets/issues/878
Loading Data From S3 Path in Sagemaker
We were brainstorming around your use-case. Let's keep the issue open for now, I think this is an interesting question to think about.
In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files["validation"] = valid_path data_files["test"] = test_path extension = train_path.split(".")[-1] datasets = load_dataset(extension, data_files=data_files, s3_enabled=True) print(datasets)` I getting an error of `algo-1-7plil_1 | File "main.py", line 21, in <module> algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__ algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file))) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime algo-1-7plil_1 | return os.stat(filename).st_mtime algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv` But when im trying with pandas , it is able to load from S3 Does the datasets library support S3 path to load
23
Loading Data From S3 Path in Sagemaker In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files["validation"] = valid_path data_files["test"] = test_path extension = train_path.split(".")[-1] datasets = load_dataset(extension, data_files=data_files, s3_enabled=True) print(datasets)` I getting an error of `algo-1-7plil_1 | File "main.py", line 21, in <module> algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__ algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file))) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime algo-1-7plil_1 | return os.stat(filename).st_mtime algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv` But when im trying with pandas , it is able to load from S3 Does the datasets library support S3 path to load We were brainstorming around your use-case. Let's keep the issue open for now, I think this is an interesting question to think about.
https://github.com/huggingface/datasets/issues/878
Loading Data From S3 Path in Sagemaker
> We were brainstorming around your use-case. > > Let's keep the issue open for now, I think this is an interesting question to think about. Sure thomwolf, Thanks for your concern
In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files["validation"] = valid_path data_files["test"] = test_path extension = train_path.split(".")[-1] datasets = load_dataset(extension, data_files=data_files, s3_enabled=True) print(datasets)` I getting an error of `algo-1-7plil_1 | File "main.py", line 21, in <module> algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__ algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file))) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime algo-1-7plil_1 | return os.stat(filename).st_mtime algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv` But when im trying with pandas , it is able to load from S3 Does the datasets library support S3 path to load
32
Loading Data From S3 Path in Sagemaker In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files["validation"] = valid_path data_files["test"] = test_path extension = train_path.split(".")[-1] datasets = load_dataset(extension, data_files=data_files, s3_enabled=True) print(datasets)` I getting an error of `algo-1-7plil_1 | File "main.py", line 21, in <module> algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__ algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file))) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime algo-1-7plil_1 | return os.stat(filename).st_mtime algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv` But when im trying with pandas , it is able to load from S3 Does the datasets library support S3 path to load > We were brainstorming around your use-case. > > Let's keep the issue open for now, I think this is an interesting question to think about. Sure thomwolf, Thanks for your concern
https://github.com/huggingface/datasets/issues/878
Loading Data From S3 Path in Sagemaker
I agree it would be cool to have that feature. Also that's good to know that pandas supports this. For the moment I'd suggest to first download the files locally as thom suggested and then load the dataset by providing paths to the local files
In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files["validation"] = valid_path data_files["test"] = test_path extension = train_path.split(".")[-1] datasets = load_dataset(extension, data_files=data_files, s3_enabled=True) print(datasets)` I getting an error of `algo-1-7plil_1 | File "main.py", line 21, in <module> algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__ algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file))) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime algo-1-7plil_1 | return os.stat(filename).st_mtime algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv` But when im trying with pandas , it is able to load from S3 Does the datasets library support S3 path to load
45
Loading Data From S3 Path in Sagemaker In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files["validation"] = valid_path data_files["test"] = test_path extension = train_path.split(".")[-1] datasets = load_dataset(extension, data_files=data_files, s3_enabled=True) print(datasets)` I getting an error of `algo-1-7plil_1 | File "main.py", line 21, in <module> algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__ algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file))) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime algo-1-7plil_1 | return os.stat(filename).st_mtime algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv` But when im trying with pandas , it is able to load from S3 Does the datasets library support S3 path to load I agree it would be cool to have that feature. Also that's good to know that pandas supports this. For the moment I'd suggest to first download the files locally as thom suggested and then load the dataset by providing paths to the local files
https://github.com/huggingface/datasets/issues/878
Loading Data From S3 Path in Sagemaker
Any updates on this issue? I face a similar issue. I have many parquet files in S3 and I would like to train on them. To be honest I even face issues with only getting the last layer embedding out of them.
In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files["validation"] = valid_path data_files["test"] = test_path extension = train_path.split(".")[-1] datasets = load_dataset(extension, data_files=data_files, s3_enabled=True) print(datasets)` I getting an error of `algo-1-7plil_1 | File "main.py", line 21, in <module> algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__ algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file))) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime algo-1-7plil_1 | return os.stat(filename).st_mtime algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv` But when im trying with pandas , it is able to load from S3 Does the datasets library support S3 path to load
42
Loading Data From S3 Path in Sagemaker In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files["validation"] = valid_path data_files["test"] = test_path extension = train_path.split(".")[-1] datasets = load_dataset(extension, data_files=data_files, s3_enabled=True) print(datasets)` I getting an error of `algo-1-7plil_1 | File "main.py", line 21, in <module> algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__ algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file))) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime algo-1-7plil_1 | return os.stat(filename).st_mtime algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv` But when im trying with pandas , it is able to load from S3 Does the datasets library support S3 path to load Any updates on this issue? I face a similar issue. I have many parquet files in S3 and I would like to train on them. To be honest I even face issues with only getting the last layer embedding out of them.
https://github.com/huggingface/datasets/issues/878
Loading Data From S3 Path in Sagemaker
Hi dorlavie, You can find one solution that i have mentioned above, that can help you. And there is one more solution also which is downloading files locally
In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files["validation"] = valid_path data_files["test"] = test_path extension = train_path.split(".")[-1] datasets = load_dataset(extension, data_files=data_files, s3_enabled=True) print(datasets)` I getting an error of `algo-1-7plil_1 | File "main.py", line 21, in <module> algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__ algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file))) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime algo-1-7plil_1 | return os.stat(filename).st_mtime algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv` But when im trying with pandas , it is able to load from S3 Does the datasets library support S3 path to load
28
Loading Data From S3 Path in Sagemaker In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files["validation"] = valid_path data_files["test"] = test_path extension = train_path.split(".")[-1] datasets = load_dataset(extension, data_files=data_files, s3_enabled=True) print(datasets)` I getting an error of `algo-1-7plil_1 | File "main.py", line 21, in <module> algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__ algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file))) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime algo-1-7plil_1 | return os.stat(filename).st_mtime algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv` But when im trying with pandas , it is able to load from S3 Does the datasets library support S3 path to load Hi dorlavie, You can find one solution that i have mentioned above, that can help you. And there is one more solution also which is downloading files locally
https://github.com/huggingface/datasets/issues/878
Loading Data From S3 Path in Sagemaker
> Hi dorlavie, > You can find one solution that i have mentioned above, that can help you. > And there is one more solution also which is downloading files locally mahesh1amour, thanks for the fast reply Unfortunately, in my case I can not read with pandas. The dataset is too big (50GB). In addition, due to security concerns I am not allowed to save the data locally
In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files["validation"] = valid_path data_files["test"] = test_path extension = train_path.split(".")[-1] datasets = load_dataset(extension, data_files=data_files, s3_enabled=True) print(datasets)` I getting an error of `algo-1-7plil_1 | File "main.py", line 21, in <module> algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__ algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file))) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime algo-1-7plil_1 | return os.stat(filename).st_mtime algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv` But when im trying with pandas , it is able to load from S3 Does the datasets library support S3 path to load
68
Loading Data From S3 Path in Sagemaker In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files["validation"] = valid_path data_files["test"] = test_path extension = train_path.split(".")[-1] datasets = load_dataset(extension, data_files=data_files, s3_enabled=True) print(datasets)` I getting an error of `algo-1-7plil_1 | File "main.py", line 21, in <module> algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__ algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file))) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime algo-1-7plil_1 | return os.stat(filename).st_mtime algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv` But when im trying with pandas , it is able to load from S3 Does the datasets library support S3 path to load > Hi dorlavie, > You can find one solution that i have mentioned above, that can help you. > And there is one more solution also which is downloading files locally mahesh1amour, thanks for the fast reply Unfortunately, in my case I can not read with pandas. The dataset is too big (50GB). In addition, due to security concerns I am not allowed to save the data locally
https://github.com/huggingface/datasets/issues/878
Loading Data From S3 Path in Sagemaker
@dorlavie could use `boto3` to download the data to your local machine and then load it with `dataset` boto3 example [documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-example-download-file.html) ```python import boto3 s3 = boto3.client('s3') s3.download_file('BUCKET_NAME', 'OBJECT_NAME', 'FILE_NAME') ``` datasets example [documentation](https://huggingface.co/docs/datasets/loading_datasets.html) ```python from datasets import load_dataset dataset = load_dataset('csv', data_files=['my_file_1.csv', 'my_file_2.csv', 'my_file_3.csv']) ```
In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files["validation"] = valid_path data_files["test"] = test_path extension = train_path.split(".")[-1] datasets = load_dataset(extension, data_files=data_files, s3_enabled=True) print(datasets)` I getting an error of `algo-1-7plil_1 | File "main.py", line 21, in <module> algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__ algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file))) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime algo-1-7plil_1 | return os.stat(filename).st_mtime algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv` But when im trying with pandas , it is able to load from S3 Does the datasets library support S3 path to load
46
Loading Data From S3 Path in Sagemaker In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files["validation"] = valid_path data_files["test"] = test_path extension = train_path.split(".")[-1] datasets = load_dataset(extension, data_files=data_files, s3_enabled=True) print(datasets)` I getting an error of `algo-1-7plil_1 | File "main.py", line 21, in <module> algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__ algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file))) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime algo-1-7plil_1 | return os.stat(filename).st_mtime algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv` But when im trying with pandas , it is able to load from S3 Does the datasets library support S3 path to load @dorlavie could use `boto3` to download the data to your local machine and then load it with `dataset` boto3 example [documentation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-example-download-file.html) ```python import boto3 s3 = boto3.client('s3') s3.download_file('BUCKET_NAME', 'OBJECT_NAME', 'FILE_NAME') ``` datasets example [documentation](https://huggingface.co/docs/datasets/loading_datasets.html) ```python from datasets import load_dataset dataset = load_dataset('csv', data_files=['my_file_1.csv', 'my_file_2.csv', 'my_file_3.csv']) ```
https://github.com/huggingface/datasets/issues/878
Loading Data From S3 Path in Sagemaker
Thanks @philschmid for the suggestion. As I mentioned in the previous comment, due to security issues I can not save the data locally. I need to read it from S3 and process it directly. I guess that many other people try to train / fit those models on huge datasets (e.g entire Wiki), what is the best practice in those cases?
In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files["validation"] = valid_path data_files["test"] = test_path extension = train_path.split(".")[-1] datasets = load_dataset(extension, data_files=data_files, s3_enabled=True) print(datasets)` I getting an error of `algo-1-7plil_1 | File "main.py", line 21, in <module> algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__ algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file))) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime algo-1-7plil_1 | return os.stat(filename).st_mtime algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv` But when im trying with pandas , it is able to load from S3 Does the datasets library support S3 path to load
61
Loading Data From S3 Path in Sagemaker In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files["validation"] = valid_path data_files["test"] = test_path extension = train_path.split(".")[-1] datasets = load_dataset(extension, data_files=data_files, s3_enabled=True) print(datasets)` I getting an error of `algo-1-7plil_1 | File "main.py", line 21, in <module> algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__ algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file))) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime algo-1-7plil_1 | return os.stat(filename).st_mtime algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv` But when im trying with pandas , it is able to load from S3 Does the datasets library support S3 path to load Thanks @philschmid for the suggestion. As I mentioned in the previous comment, due to security issues I can not save the data locally. I need to read it from S3 and process it directly. I guess that many other people try to train / fit those models on huge datasets (e.g entire Wiki), what is the best practice in those cases?
https://github.com/huggingface/datasets/issues/878
Loading Data From S3 Path in Sagemaker
If I understand correctly you're not allowed to write data on disk that you downloaded from S3 for example ? Or is it the use of the `boto3` library that is not allowed in your case ?
In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files["validation"] = valid_path data_files["test"] = test_path extension = train_path.split(".")[-1] datasets = load_dataset(extension, data_files=data_files, s3_enabled=True) print(datasets)` I getting an error of `algo-1-7plil_1 | File "main.py", line 21, in <module> algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__ algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file))) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime algo-1-7plil_1 | return os.stat(filename).st_mtime algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv` But when im trying with pandas , it is able to load from S3 Does the datasets library support S3 path to load
37
Loading Data From S3 Path in Sagemaker In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files["validation"] = valid_path data_files["test"] = test_path extension = train_path.split(".")[-1] datasets = load_dataset(extension, data_files=data_files, s3_enabled=True) print(datasets)` I getting an error of `algo-1-7plil_1 | File "main.py", line 21, in <module> algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__ algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file))) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime algo-1-7plil_1 | return os.stat(filename).st_mtime algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv` But when im trying with pandas , it is able to load from S3 Does the datasets library support S3 path to load If I understand correctly you're not allowed to write data on disk that you downloaded from S3 for example ? Or is it the use of the `boto3` library that is not allowed in your case ?
https://github.com/huggingface/datasets/issues/878
Loading Data From S3 Path in Sagemaker
@lhoestq yes you are correct. I am not allowed to save the "raw text" locally - The "raw text" must be saved only on S3. I am allowed to save the output of any model locally. It doesn't matter how I do it boto3/pandas/pyarrow, it is forbidden
In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files["validation"] = valid_path data_files["test"] = test_path extension = train_path.split(".")[-1] datasets = load_dataset(extension, data_files=data_files, s3_enabled=True) print(datasets)` I getting an error of `algo-1-7plil_1 | File "main.py", line 21, in <module> algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__ algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file))) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime algo-1-7plil_1 | return os.stat(filename).st_mtime algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv` But when im trying with pandas , it is able to load from S3 Does the datasets library support S3 path to load
47
Loading Data From S3 Path in Sagemaker In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files["validation"] = valid_path data_files["test"] = test_path extension = train_path.split(".")[-1] datasets = load_dataset(extension, data_files=data_files, s3_enabled=True) print(datasets)` I getting an error of `algo-1-7plil_1 | File "main.py", line 21, in <module> algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__ algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file))) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime algo-1-7plil_1 | return os.stat(filename).st_mtime algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv` But when im trying with pandas , it is able to load from S3 Does the datasets library support S3 path to load @lhoestq yes you are correct. I am not allowed to save the "raw text" locally - The "raw text" must be saved only on S3. I am allowed to save the output of any model locally. It doesn't matter how I do it boto3/pandas/pyarrow, it is forbidden
https://github.com/huggingface/datasets/issues/878
Loading Data From S3 Path in Sagemaker
@dorlavie are you using sagemaker for training too? Then you could use S3 URI, for example `s3://my-bucket/my-training-data` and pass it within the `.fit()` function when you start the sagemaker training job. Sagemaker would then download the data from s3 into the training runtime and you could load it from disk **sagemaker start training job** ```python pytorch_estimator.fit({'train':'s3://my-bucket/my-training-data','eval':'s3://my-bucket/my-evaluation-data'}) ``` **in the train.py script** ```python from datasets import load_from_disk train_dataset = load_from_disk(os.environ['SM_CHANNEL_TRAIN']) ``` I have created an example of how to use transformers and datasets with sagemaker. https://github.com/philschmid/huggingface-sagemaker-example/tree/main/03_huggingface_sagemaker_trainer_with_data_from_s3 The example contains a jupyter notebook `sagemaker-example.ipynb` and an `src/` folder. The sagemaker-example is a jupyter notebook that is used to create the training job on AWS Sagemaker. The `src/` folder contains the `train.py`, our training script, and `requirements.txt` for additional dependencies.
In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files["validation"] = valid_path data_files["test"] = test_path extension = train_path.split(".")[-1] datasets = load_dataset(extension, data_files=data_files, s3_enabled=True) print(datasets)` I getting an error of `algo-1-7plil_1 | File "main.py", line 21, in <module> algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__ algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file))) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime algo-1-7plil_1 | return os.stat(filename).st_mtime algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv` But when im trying with pandas , it is able to load from S3 Does the datasets library support S3 path to load
127
Loading Data From S3 Path in Sagemaker In Sagemaker Im tring to load the data set from S3 path as follows `train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv' valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv' test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv' data_files = {} data_files["train"] = train_path data_files["validation"] = valid_path data_files["test"] = test_path extension = train_path.split(".")[-1] datasets = load_dataset(extension, data_files=data_files, s3_enabled=True) print(datasets)` I getting an error of `algo-1-7plil_1 | File "main.py", line 21, in <module> algo-1-7plil_1 | datasets = load_dataset(extension, data_files=data_files) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/load.py", line 603, in load_dataset algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 155, in __init__ algo-1-7plil_1 | **config_kwargs, algo-1-7plil_1 | File "/opt/conda/lib/python3.6/site-packages/datasets/builder.py", line 305, in _create_builder_config algo-1-7plil_1 | m.update(str(os.path.getmtime(data_file))) algo-1-7plil_1 | File "/opt/conda/lib/python3.6/genericpath.py", line 55, in getmtime algo-1-7plil_1 | return os.stat(filename).st_mtime algo-1-7plil_1 | FileNotFoundError: [Errno 2] No such file or directory: 's3://lsmv-sagemaker/pubmedbert/test.csv` But when im trying with pandas , it is able to load from S3 Does the datasets library support S3 path to load @dorlavie are you using sagemaker for training too? Then you could use S3 URI, for example `s3://my-bucket/my-training-data` and pass it within the `.fit()` function when you start the sagemaker training job. Sagemaker would then download the data from s3 into the training runtime and you could load it from disk **sagemaker start training job** ```python pytorch_estimator.fit({'train':'s3://my-bucket/my-training-data','eval':'s3://my-bucket/my-evaluation-data'}) ``` **in the train.py script** ```python from datasets import load_from_disk train_dataset = load_from_disk(os.environ['SM_CHANNEL_TRAIN']) ``` I have created an example of how to use transformers and datasets with sagemaker. https://github.com/philschmid/huggingface-sagemaker-example/tree/main/03_huggingface_sagemaker_trainer_with_data_from_s3 The example contains a jupyter notebook `sagemaker-example.ipynb` and an `src/` folder. The sagemaker-example is a jupyter notebook that is used to create the training job on AWS Sagemaker. The `src/` folder contains the `train.py`, our training script, and `requirements.txt` for additional dependencies.
https://github.com/huggingface/datasets/issues/877
DataLoader(datasets) become more and more slowly within iterations
Hi ! Thanks for reporting. Do you have the same slowdown when you iterate through the raw dataset object as well ? (no dataloader) It would be nice to know whether it comes from the dataloader or not
Hello, when I for loop my dataloader, the loading speed is becoming more and more slowly! ``` dataset = load_from_disk(dataset_path) # around 21,000,000 lines lineloader = tqdm(DataLoader(dataset, batch_size=1)) for idx, line in enumerate(lineloader): # do some thing for each line ``` In the begining, the loading speed is around 2000it/s, but after 1 minutes later, the speed is much slower, just around 800it/s. And when I set `num_workers=4` in DataLoader, the loading speed is much lower, just 130it/s. Could you please help me with this problem? Thanks a lot!
38
DataLoader(datasets) become more and more slowly within iterations Hello, when I for loop my dataloader, the loading speed is becoming more and more slowly! ``` dataset = load_from_disk(dataset_path) # around 21,000,000 lines lineloader = tqdm(DataLoader(dataset, batch_size=1)) for idx, line in enumerate(lineloader): # do some thing for each line ``` In the begining, the loading speed is around 2000it/s, but after 1 minutes later, the speed is much slower, just around 800it/s. And when I set `num_workers=4` in DataLoader, the loading speed is much lower, just 130it/s. Could you please help me with this problem? Thanks a lot! Hi ! Thanks for reporting. Do you have the same slowdown when you iterate through the raw dataset object as well ? (no dataloader) It would be nice to know whether it comes from the dataloader or not
https://github.com/huggingface/datasets/issues/877
DataLoader(datasets) become more and more slowly within iterations
> Hi ! Thanks for reporting. > Do you have the same slowdown when you iterate through the raw dataset object as well ? (no dataloader) > It would be nice to know whether it comes from the dataloader or not I did not iter data from raw dataset, maybe I will test later. Now I iter all files directly from `open(file)`, around 20000it/s.
Hello, when I for loop my dataloader, the loading speed is becoming more and more slowly! ``` dataset = load_from_disk(dataset_path) # around 21,000,000 lines lineloader = tqdm(DataLoader(dataset, batch_size=1)) for idx, line in enumerate(lineloader): # do some thing for each line ``` In the begining, the loading speed is around 2000it/s, but after 1 minutes later, the speed is much slower, just around 800it/s. And when I set `num_workers=4` in DataLoader, the loading speed is much lower, just 130it/s. Could you please help me with this problem? Thanks a lot!
64
DataLoader(datasets) become more and more slowly within iterations Hello, when I for loop my dataloader, the loading speed is becoming more and more slowly! ``` dataset = load_from_disk(dataset_path) # around 21,000,000 lines lineloader = tqdm(DataLoader(dataset, batch_size=1)) for idx, line in enumerate(lineloader): # do some thing for each line ``` In the begining, the loading speed is around 2000it/s, but after 1 minutes later, the speed is much slower, just around 800it/s. And when I set `num_workers=4` in DataLoader, the loading speed is much lower, just 130it/s. Could you please help me with this problem? Thanks a lot! > Hi ! Thanks for reporting. > Do you have the same slowdown when you iterate through the raw dataset object as well ? (no dataloader) > It would be nice to know whether it comes from the dataloader or not I did not iter data from raw dataset, maybe I will test later. Now I iter all files directly from `open(file)`, around 20000it/s.
https://github.com/huggingface/datasets/issues/876
imdb dataset cannot be loaded
It looks like there was an issue while building the imdb dataset. Could you provide more information about your OS and the version of python and `datasets` ? Also could you try again with ```python dataset = datasets.load_dataset("imdb", split="train", download_mode="force_redownload") ``` to make sure it's not a corrupted file issue ?
Hi I am trying to load the imdb train dataset `dataset = datasets.load_dataset("imdb", split="train")` getting following errors, thanks for your help ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 558, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 73, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=32660064, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='test', num_bytes=26476338, num_examples=20316, dataset_name='imdb')}, {'expected': SplitInfo(name='train', num_bytes=33442202, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='imdb')}, {'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}] >>> dataset = datasets.load_dataset("imdb", split="train") ```
51
imdb dataset cannot be loaded Hi I am trying to load the imdb train dataset `dataset = datasets.load_dataset("imdb", split="train")` getting following errors, thanks for your help ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 558, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 73, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=32660064, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='test', num_bytes=26476338, num_examples=20316, dataset_name='imdb')}, {'expected': SplitInfo(name='train', num_bytes=33442202, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='imdb')}, {'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}] >>> dataset = datasets.load_dataset("imdb", split="train") ``` It looks like there was an issue while building the imdb dataset. Could you provide more information about your OS and the version of python and `datasets` ? Also could you try again with ```python dataset = datasets.load_dataset("imdb", split="train", download_mode="force_redownload") ``` to make sure it's not a corrupted file issue ?
https://github.com/huggingface/datasets/issues/876
imdb dataset cannot be loaded
Hi ! I just tried in 1.8.0 and it worked fine. Can you try again ? Maybe the dataset host had some issues that are fixed now
Hi I am trying to load the imdb train dataset `dataset = datasets.load_dataset("imdb", split="train")` getting following errors, thanks for your help ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 558, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 73, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=32660064, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='test', num_bytes=26476338, num_examples=20316, dataset_name='imdb')}, {'expected': SplitInfo(name='train', num_bytes=33442202, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='imdb')}, {'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}] >>> dataset = datasets.load_dataset("imdb", split="train") ```
27
imdb dataset cannot be loaded Hi I am trying to load the imdb train dataset `dataset = datasets.load_dataset("imdb", split="train")` getting following errors, thanks for your help ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 558, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 73, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=32660064, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='test', num_bytes=26476338, num_examples=20316, dataset_name='imdb')}, {'expected': SplitInfo(name='train', num_bytes=33442202, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='imdb')}, {'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}] >>> dataset = datasets.load_dataset("imdb", split="train") ``` Hi ! I just tried in 1.8.0 and it worked fine. Can you try again ? Maybe the dataset host had some issues that are fixed now
https://github.com/huggingface/datasets/issues/873
load_dataset('cnn_dalymail', '3.0.0') gives a 'Not a directory' error
I see the issue happening again today - [nltk_data] Downloading package stopwords to /root/nltk_data... [nltk_data] Package stopwords is already up-to-date! Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602... --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-9-cd4bf8bea840> in <module>() 22 23 ---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train') 25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation') 26 test = load_dataset('cnn_dailymail', '3.0.0', split='test') 5 frames /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' Can someone please take a look ?
``` from datasets import load_dataset dataset = load_dataset('cnn_dailymail', '3.0.0') ``` Stack trace: ``` --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-6-2e06a8332652> in <module>() 1 from datasets import load_dataset ----> 2 dataset = load_dataset('cnn_dailymail', '3.0.0') 5 frames /usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 608 download_config=download_config, 609 download_mode=download_mode, --> 610 ignore_verifications=ignore_verifications, 611 ) 612 /usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 513 if not downloaded_from_gcs: 514 self._download_and_prepare( --> 515 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 516 ) 517 # Sync info /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 568 split_dict = SplitDict(dataset_name=self.name) 569 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 570 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 571 572 # Checksums verification /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _split_generators(self, dl_manager) 252 def _split_generators(self, dl_manager): 253 dl_paths = dl_manager.download_and_extract(_DL_URLS) --> 254 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN) 255 # Generate shared vocabulary 256 /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _subset_filenames(dl_paths, split) 153 else: 154 logging.fatal("Unsupported split: %s", split) --> 155 cnn = _find_files(dl_paths, "cnn", urls) 156 dm = _find_files(dl_paths, "dm", urls) 157 return cnn + dm /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' ``` I have ran the code on Google Colab
108
load_dataset('cnn_dalymail', '3.0.0') gives a 'Not a directory' error ``` from datasets import load_dataset dataset = load_dataset('cnn_dailymail', '3.0.0') ``` Stack trace: ``` --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-6-2e06a8332652> in <module>() 1 from datasets import load_dataset ----> 2 dataset = load_dataset('cnn_dailymail', '3.0.0') 5 frames /usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 608 download_config=download_config, 609 download_mode=download_mode, --> 610 ignore_verifications=ignore_verifications, 611 ) 612 /usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 513 if not downloaded_from_gcs: 514 self._download_and_prepare( --> 515 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 516 ) 517 # Sync info /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 568 split_dict = SplitDict(dataset_name=self.name) 569 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 570 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 571 572 # Checksums verification /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _split_generators(self, dl_manager) 252 def _split_generators(self, dl_manager): 253 dl_paths = dl_manager.download_and_extract(_DL_URLS) --> 254 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN) 255 # Generate shared vocabulary 256 /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _subset_filenames(dl_paths, split) 153 else: 154 logging.fatal("Unsupported split: %s", split) --> 155 cnn = _find_files(dl_paths, "cnn", urls) 156 dm = _find_files(dl_paths, "dm", urls) 157 return cnn + dm /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' ``` I have ran the code on Google Colab I see the issue happening again today - [nltk_data] Downloading package stopwords to /root/nltk_data... [nltk_data] Package stopwords is already up-to-date! Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602... --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-9-cd4bf8bea840> in <module>() 22 23 ---> 24 train = load_dataset('cnn_dailymail', '3.0.0', split='train') 25 validation = load_dataset('cnn_dailymail', '3.0.0', split='validation') 26 test = load_dataset('cnn_dailymail', '3.0.0', split='test') 5 frames /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' Can someone please take a look ?
https://github.com/huggingface/datasets/issues/873
load_dataset('cnn_dalymail', '3.0.0') gives a 'Not a directory' error
> atal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] > > NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' > > Can someone please take a look ? 2 short-term workarounds: 1. Use this line instead `dataset = load_dataset('ccdv/cnn_dailymail', '3.0.0')`. [In a related issue](https://github.com/huggingface/datasets/issues/996#issuecomment-997343101), this person mentioned another data source copy that just works. 2. Use the same data source, but edit the urls. Instead of google drive quota problems mentioned in #996, I was getting the "can't scan this file for viruses" problem, which results in that prompted html getting downloaded instead of the files. You can get around this by: 1. Look at the traceback and find out where `cnn_dailymail.py` is on your computer. 2. Edit the `cnn_stories` and `dm_stories` url's by adding the following to the end of them `&confirm=t`. This should be around line 67. 3. You may have to remove those confirmation html files in your download directory (`~/.cache/huggingface/datasets/downloads` for me) so that they don't get in the way of the new download attempts. Either method works for me. I would've made a PR, but not sure if they want to go with the new ccdv/cnn_dailymail source or not.
``` from datasets import load_dataset dataset = load_dataset('cnn_dailymail', '3.0.0') ``` Stack trace: ``` --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-6-2e06a8332652> in <module>() 1 from datasets import load_dataset ----> 2 dataset = load_dataset('cnn_dailymail', '3.0.0') 5 frames /usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 608 download_config=download_config, 609 download_mode=download_mode, --> 610 ignore_verifications=ignore_verifications, 611 ) 612 /usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 513 if not downloaded_from_gcs: 514 self._download_and_prepare( --> 515 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 516 ) 517 # Sync info /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 568 split_dict = SplitDict(dataset_name=self.name) 569 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 570 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 571 572 # Checksums verification /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _split_generators(self, dl_manager) 252 def _split_generators(self, dl_manager): 253 dl_paths = dl_manager.download_and_extract(_DL_URLS) --> 254 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN) 255 # Generate shared vocabulary 256 /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _subset_filenames(dl_paths, split) 153 else: 154 logging.fatal("Unsupported split: %s", split) --> 155 cnn = _find_files(dl_paths, "cnn", urls) 156 dm = _find_files(dl_paths, "dm", urls) 157 return cnn + dm /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' ``` I have ran the code on Google Colab
199
load_dataset('cnn_dalymail', '3.0.0') gives a 'Not a directory' error ``` from datasets import load_dataset dataset = load_dataset('cnn_dailymail', '3.0.0') ``` Stack trace: ``` --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-6-2e06a8332652> in <module>() 1 from datasets import load_dataset ----> 2 dataset = load_dataset('cnn_dailymail', '3.0.0') 5 frames /usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 608 download_config=download_config, 609 download_mode=download_mode, --> 610 ignore_verifications=ignore_verifications, 611 ) 612 /usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 513 if not downloaded_from_gcs: 514 self._download_and_prepare( --> 515 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 516 ) 517 # Sync info /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 568 split_dict = SplitDict(dataset_name=self.name) 569 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 570 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 571 572 # Checksums verification /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _split_generators(self, dl_manager) 252 def _split_generators(self, dl_manager): 253 dl_paths = dl_manager.download_and_extract(_DL_URLS) --> 254 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN) 255 # Generate shared vocabulary 256 /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _subset_filenames(dl_paths, split) 153 else: 154 logging.fatal("Unsupported split: %s", split) --> 155 cnn = _find_files(dl_paths, "cnn", urls) 156 dm = _find_files(dl_paths, "dm", urls) 157 return cnn + dm /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' ``` I have ran the code on Google Colab > atal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] > > NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' > > Can someone please take a look ? 2 short-term workarounds: 1. Use this line instead `dataset = load_dataset('ccdv/cnn_dailymail', '3.0.0')`. [In a related issue](https://github.com/huggingface/datasets/issues/996#issuecomment-997343101), this person mentioned another data source copy that just works. 2. Use the same data source, but edit the urls. Instead of google drive quota problems mentioned in #996, I was getting the "can't scan this file for viruses" problem, which results in that prompted html getting downloaded instead of the files. You can get around this by: 1. Look at the traceback and find out where `cnn_dailymail.py` is on your computer. 2. Edit the `cnn_stories` and `dm_stories` url's by adding the following to the end of them `&confirm=t`. This should be around line 67. 3. You may have to remove those confirmation html files in your download directory (`~/.cache/huggingface/datasets/downloads` for me) so that they don't get in the way of the new download attempts. Either method works for me. I would've made a PR, but not sure if they want to go with the new ccdv/cnn_dailymail source or not.
https://github.com/huggingface/datasets/issues/873
load_dataset('cnn_dalymail', '3.0.0') gives a 'Not a directory' error
experience the same problem, ccdv/cnn_dailymail not working either. Solve this problem by installing datasets library from the master branch: python -m pip install git+https://github.com/huggingface/datasets.git@master
``` from datasets import load_dataset dataset = load_dataset('cnn_dailymail', '3.0.0') ``` Stack trace: ``` --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-6-2e06a8332652> in <module>() 1 from datasets import load_dataset ----> 2 dataset = load_dataset('cnn_dailymail', '3.0.0') 5 frames /usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 608 download_config=download_config, 609 download_mode=download_mode, --> 610 ignore_verifications=ignore_verifications, 611 ) 612 /usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 513 if not downloaded_from_gcs: 514 self._download_and_prepare( --> 515 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 516 ) 517 # Sync info /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 568 split_dict = SplitDict(dataset_name=self.name) 569 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 570 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 571 572 # Checksums verification /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _split_generators(self, dl_manager) 252 def _split_generators(self, dl_manager): 253 dl_paths = dl_manager.download_and_extract(_DL_URLS) --> 254 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN) 255 # Generate shared vocabulary 256 /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _subset_filenames(dl_paths, split) 153 else: 154 logging.fatal("Unsupported split: %s", split) --> 155 cnn = _find_files(dl_paths, "cnn", urls) 156 dm = _find_files(dl_paths, "dm", urls) 157 return cnn + dm /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' ``` I have ran the code on Google Colab
24
load_dataset('cnn_dalymail', '3.0.0') gives a 'Not a directory' error ``` from datasets import load_dataset dataset = load_dataset('cnn_dailymail', '3.0.0') ``` Stack trace: ``` --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-6-2e06a8332652> in <module>() 1 from datasets import load_dataset ----> 2 dataset = load_dataset('cnn_dailymail', '3.0.0') 5 frames /usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 608 download_config=download_config, 609 download_mode=download_mode, --> 610 ignore_verifications=ignore_verifications, 611 ) 612 /usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 513 if not downloaded_from_gcs: 514 self._download_and_prepare( --> 515 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 516 ) 517 # Sync info /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 568 split_dict = SplitDict(dataset_name=self.name) 569 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 570 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 571 572 # Checksums verification /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _split_generators(self, dl_manager) 252 def _split_generators(self, dl_manager): 253 dl_paths = dl_manager.download_and_extract(_DL_URLS) --> 254 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN) 255 # Generate shared vocabulary 256 /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _subset_filenames(dl_paths, split) 153 else: 154 logging.fatal("Unsupported split: %s", split) --> 155 cnn = _find_files(dl_paths, "cnn", urls) 156 dm = _find_files(dl_paths, "dm", urls) 157 return cnn + dm /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' ``` I have ran the code on Google Colab experience the same problem, ccdv/cnn_dailymail not working either. Solve this problem by installing datasets library from the master branch: python -m pip install git+https://github.com/huggingface/datasets.git@master
https://github.com/huggingface/datasets/issues/873
load_dataset('cnn_dalymail', '3.0.0') gives a 'Not a directory' error
> > atal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] > > NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' > > Can someone please take a look ? > > 2 short-term workarounds: > > 1. Use this line instead `dataset = load_dataset('ccdv/cnn_dailymail', '3.0.0')`. [In a related issue](https://github.com/huggingface/datasets/issues/996#issuecomment-997343101), this person mentioned another data source copy that just works. > 2. Use the same data source, but edit the urls. Instead of google drive quota problems mentioned in [NotADirectoryError while loading the CNN/Dailymail dataset #996](https://github.com/huggingface/datasets/issues/996), I was getting the "can't scan this file for viruses" problem, which results in that prompted html getting downloaded instead of the files. You can get around this by: > > 1. Look at the traceback and find out where `cnn_dailymail.py` is on your computer. > 2. Edit the `cnn_stories` and `dm_stories` url's by adding the following to the end of them `&confirm=t`. This should be around line 67. > 3. You may have to remove those confirmation html files in your download directory (`~/.cache/huggingface/datasets/downloads` for me) so that they don't get in the way of the new download attempts. > > Either method works for me. I would've made a PR, but not sure if they want to go with the new ccdv/cnn_dailymail source or not. Thankyou, editing the urls helped me than the loading dataset line.
``` from datasets import load_dataset dataset = load_dataset('cnn_dailymail', '3.0.0') ``` Stack trace: ``` --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-6-2e06a8332652> in <module>() 1 from datasets import load_dataset ----> 2 dataset = load_dataset('cnn_dailymail', '3.0.0') 5 frames /usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 608 download_config=download_config, 609 download_mode=download_mode, --> 610 ignore_verifications=ignore_verifications, 611 ) 612 /usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 513 if not downloaded_from_gcs: 514 self._download_and_prepare( --> 515 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 516 ) 517 # Sync info /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 568 split_dict = SplitDict(dataset_name=self.name) 569 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 570 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 571 572 # Checksums verification /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _split_generators(self, dl_manager) 252 def _split_generators(self, dl_manager): 253 dl_paths = dl_manager.download_and_extract(_DL_URLS) --> 254 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN) 255 # Generate shared vocabulary 256 /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _subset_filenames(dl_paths, split) 153 else: 154 logging.fatal("Unsupported split: %s", split) --> 155 cnn = _find_files(dl_paths, "cnn", urls) 156 dm = _find_files(dl_paths, "dm", urls) 157 return cnn + dm /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' ``` I have ran the code on Google Colab
228
load_dataset('cnn_dalymail', '3.0.0') gives a 'Not a directory' error ``` from datasets import load_dataset dataset = load_dataset('cnn_dailymail', '3.0.0') ``` Stack trace: ``` --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-6-2e06a8332652> in <module>() 1 from datasets import load_dataset ----> 2 dataset = load_dataset('cnn_dailymail', '3.0.0') 5 frames /usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 608 download_config=download_config, 609 download_mode=download_mode, --> 610 ignore_verifications=ignore_verifications, 611 ) 612 /usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 513 if not downloaded_from_gcs: 514 self._download_and_prepare( --> 515 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 516 ) 517 # Sync info /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 568 split_dict = SplitDict(dataset_name=self.name) 569 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 570 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 571 572 # Checksums verification /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _split_generators(self, dl_manager) 252 def _split_generators(self, dl_manager): 253 dl_paths = dl_manager.download_and_extract(_DL_URLS) --> 254 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN) 255 # Generate shared vocabulary 256 /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _subset_filenames(dl_paths, split) 153 else: 154 logging.fatal("Unsupported split: %s", split) --> 155 cnn = _find_files(dl_paths, "cnn", urls) 156 dm = _find_files(dl_paths, "dm", urls) 157 return cnn + dm /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' ``` I have ran the code on Google Colab > > atal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] > > NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' > > Can someone please take a look ? > > 2 short-term workarounds: > > 1. Use this line instead `dataset = load_dataset('ccdv/cnn_dailymail', '3.0.0')`. [In a related issue](https://github.com/huggingface/datasets/issues/996#issuecomment-997343101), this person mentioned another data source copy that just works. > 2. Use the same data source, but edit the urls. Instead of google drive quota problems mentioned in [NotADirectoryError while loading the CNN/Dailymail dataset #996](https://github.com/huggingface/datasets/issues/996), I was getting the "can't scan this file for viruses" problem, which results in that prompted html getting downloaded instead of the files. You can get around this by: > > 1. Look at the traceback and find out where `cnn_dailymail.py` is on your computer. > 2. Edit the `cnn_stories` and `dm_stories` url's by adding the following to the end of them `&confirm=t`. This should be around line 67. > 3. You may have to remove those confirmation html files in your download directory (`~/.cache/huggingface/datasets/downloads` for me) so that they don't get in the way of the new download attempts. > > Either method works for me. I would've made a PR, but not sure if they want to go with the new ccdv/cnn_dailymail source or not. Thankyou, editing the urls helped me than the loading dataset line.
https://github.com/huggingface/datasets/issues/871
terminate called after throwing an instance of 'google::protobuf::FatalException'
Loading the iwslt2017-en-nl config of iwslt2017 works fine on my side. Maybe you can open an issue on transformers as well ? And also add more details about your environment (OS, python version, version of transformers and datasets etc.)
Hi I am using the dataset "iwslt2017-en-nl", and after downloading it I am getting this error when trying to evaluate it on T5-base with seq2seq_trainer.py in the huggingface repo could you assist me please? thanks 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 63/63 [02:47<00:00, 2.18s/it][libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): terminate called after throwing an instance of 'google::protobuf::FatalException' what(): CHECK failed: (index) >= (0): run_t5_base_eval.sh: line 19: 5795 Aborted
39
terminate called after throwing an instance of 'google::protobuf::FatalException' Hi I am using the dataset "iwslt2017-en-nl", and after downloading it I am getting this error when trying to evaluate it on T5-base with seq2seq_trainer.py in the huggingface repo could you assist me please? thanks 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 63/63 [02:47<00:00, 2.18s/it][libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): terminate called after throwing an instance of 'google::protobuf::FatalException' what(): CHECK failed: (index) >= (0): run_t5_base_eval.sh: line 19: 5795 Aborted Loading the iwslt2017-en-nl config of iwslt2017 works fine on my side. Maybe you can open an issue on transformers as well ? And also add more details about your environment (OS, python version, version of transformers and datasets etc.)
https://github.com/huggingface/datasets/issues/871
terminate called after throwing an instance of 'google::protobuf::FatalException'
closing now, figured out this is because the max length of decoder was set smaller than the input_dimensions. thanks
Hi I am using the dataset "iwslt2017-en-nl", and after downloading it I am getting this error when trying to evaluate it on T5-base with seq2seq_trainer.py in the huggingface repo could you assist me please? thanks 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 63/63 [02:47<00:00, 2.18s/it][libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): terminate called after throwing an instance of 'google::protobuf::FatalException' what(): CHECK failed: (index) >= (0): run_t5_base_eval.sh: line 19: 5795 Aborted
19
terminate called after throwing an instance of 'google::protobuf::FatalException' Hi I am using the dataset "iwslt2017-en-nl", and after downloading it I am getting this error when trying to evaluate it on T5-base with seq2seq_trainer.py in the huggingface repo could you assist me please? thanks 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 63/63 [02:47<00:00, 2.18s/it][libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0): terminate called after throwing an instance of 'google::protobuf::FatalException' what(): CHECK failed: (index) >= (0): run_t5_base_eval.sh: line 19: 5795 Aborted closing now, figured out this is because the max length of decoder was set smaller than the input_dimensions. thanks
https://github.com/huggingface/datasets/issues/870
[Feature Request] Add optional parameter in text loading script to preserve linebreaks
Hi ! Thanks for your message. Indeed it's a free feature we can add and that can be useful. If you want to contribute, feel free to open a PR to add it to the text dataset script :)
I'm working on a project about rhyming verse using phonetic poetry and song lyrics, and line breaks are a vital part of the data. I recently switched over to use the datasets library when my various corpora grew larger than my computer's memory. And so far, it is SO great. But the first time I processed all of my data into a dataset, I hadn't realized the text loader script was processing the source files line-by-line and stripping off the newlines. Once I caught the issue, I made my own data loader by modifying one line in the default text loader (changing `batch = batch.splitlines()` to `batch = batch.splitlines(True)` inside `_generate_tables`). And so I'm all set as far as my project is concerned. But if my use case is more general, it seems like it'd be pretty trivial to add a kwarg to the default text loader called keeplinebreaks or something, which would default to False and get passed to `splitlines()`.
39
[Feature Request] Add optional parameter in text loading script to preserve linebreaks I'm working on a project about rhyming verse using phonetic poetry and song lyrics, and line breaks are a vital part of the data. I recently switched over to use the datasets library when my various corpora grew larger than my computer's memory. And so far, it is SO great. But the first time I processed all of my data into a dataset, I hadn't realized the text loader script was processing the source files line-by-line and stripping off the newlines. Once I caught the issue, I made my own data loader by modifying one line in the default text loader (changing `batch = batch.splitlines()` to `batch = batch.splitlines(True)` inside `_generate_tables`). And so I'm all set as far as my project is concerned. But if my use case is more general, it seems like it'd be pretty trivial to add a kwarg to the default text loader called keeplinebreaks or something, which would default to False and get passed to `splitlines()`. Hi ! Thanks for your message. Indeed it's a free feature we can add and that can be useful. If you want to contribute, feel free to open a PR to add it to the text dataset script :)
https://github.com/huggingface/datasets/issues/866
OSCAR from Inria group
PR is already open here : #348 The only thing remaining is to compute the metadata of each subdataset (one per language + shuffled/unshuffled). As soon as #863 is merged we can start computing them. This will take a bit of time though
## Adding a Dataset - **Name:** *OSCAR* (Open Super-large Crawled ALMAnaCH coRpus), multilingual parsing of Common Crawl (separate crawls for many different languages), [here](https://oscar-corpus.com/). - **Description:** *OSCAR or Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.* - **Paper:** *[here](https://hal.inria.fr/hal-02148693)* - **Data:** *[here](https://oscar-corpus.com/)* - **Motivation:** *useful for unsupervised tasks in separate languages. In an ideal world, your team would be able to obtain the unshuffled version, that could be used to train GPT-2-like models (the shuffled version, I suppose, could be used for translation).* I am aware that you do offer the "colossal" Common Crawl dataset already, but this has the advantage to be available in many subcorpora for different languages.
43
OSCAR from Inria group ## Adding a Dataset - **Name:** *OSCAR* (Open Super-large Crawled ALMAnaCH coRpus), multilingual parsing of Common Crawl (separate crawls for many different languages), [here](https://oscar-corpus.com/). - **Description:** *OSCAR or Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.* - **Paper:** *[here](https://hal.inria.fr/hal-02148693)* - **Data:** *[here](https://oscar-corpus.com/)* - **Motivation:** *useful for unsupervised tasks in separate languages. In an ideal world, your team would be able to obtain the unshuffled version, that could be used to train GPT-2-like models (the shuffled version, I suppose, could be used for translation).* I am aware that you do offer the "colossal" Common Crawl dataset already, but this has the advantage to be available in many subcorpora for different languages. PR is already open here : #348 The only thing remaining is to compute the metadata of each subdataset (one per language + shuffled/unshuffled). As soon as #863 is merged we can start computing them. This will take a bit of time though
https://github.com/huggingface/datasets/issues/865
Have Trouble importing `datasets`
I'm sorry, this was a problem with my environment. Now that I have identified the cause of environmental dependency, I would like to fix it and try it. Excuse me for making a noise.
I'm failing to import transformers (v4.0.0-dev), and tracing the cause seems to be failing to import datasets. I cloned the newest version of datasets (master branch), and do `pip install -e .`. Then, `import datasets` causes the error below. ``` ~/workspace/Clone/datasets/src/datasets/utils/file_utils.py in <module> 116 sys.path.append(str(HF_MODULES_CACHE)) 117 --> 118 os.makedirs(HF_MODULES_CACHE, exist_ok=True) 119 if not os.path.exists(os.path.join(HF_MODULES_CACHE, "__init__.py")): 120 with open(os.path.join(HF_MODULES_CACHE, "__init__.py"), "w"): ~/.pyenv/versions/anaconda3-2020.07/lib/python3.8/os.py in makedirs(name, mode, exist_ok) 221 return 222 try: --> 223 mkdir(name, mode) 224 except OSError: 225 # Cannot rely on checking for EEXIST, since the operating system FileNotFoundError: [Errno 2] No such file or directory: '<MY_HOME_DIRECTORY>/.cache/huggingface/modules' ``` The error occurs in `os.makedirs` in `file_utils.py`, even though `exist_ok = True` option is set. (I use Python 3.8, so `exist_ok` is expected to work.) I've checked some environment variables, and they are set as below. ``` *** NameError: name 'HF_MODULES_CACHE' is not defined *** NameError: name 'hf_cache_home' is not defined *** NameError: name 'XDG_CACHE_HOME' is not defined ``` Should I set some environment variables before using this library? And, do you have any idea why "No such file or directory" occurs even though the `exist_ok = True` option is set? Thank you in advance.
34
Have Trouble importing `datasets` I'm failing to import transformers (v4.0.0-dev), and tracing the cause seems to be failing to import datasets. I cloned the newest version of datasets (master branch), and do `pip install -e .`. Then, `import datasets` causes the error below. ``` ~/workspace/Clone/datasets/src/datasets/utils/file_utils.py in <module> 116 sys.path.append(str(HF_MODULES_CACHE)) 117 --> 118 os.makedirs(HF_MODULES_CACHE, exist_ok=True) 119 if not os.path.exists(os.path.join(HF_MODULES_CACHE, "__init__.py")): 120 with open(os.path.join(HF_MODULES_CACHE, "__init__.py"), "w"): ~/.pyenv/versions/anaconda3-2020.07/lib/python3.8/os.py in makedirs(name, mode, exist_ok) 221 return 222 try: --> 223 mkdir(name, mode) 224 except OSError: 225 # Cannot rely on checking for EEXIST, since the operating system FileNotFoundError: [Errno 2] No such file or directory: '<MY_HOME_DIRECTORY>/.cache/huggingface/modules' ``` The error occurs in `os.makedirs` in `file_utils.py`, even though `exist_ok = True` option is set. (I use Python 3.8, so `exist_ok` is expected to work.) I've checked some environment variables, and they are set as below. ``` *** NameError: name 'HF_MODULES_CACHE' is not defined *** NameError: name 'hf_cache_home' is not defined *** NameError: name 'XDG_CACHE_HOME' is not defined ``` Should I set some environment variables before using this library? And, do you have any idea why "No such file or directory" occurs even though the `exist_ok = True` option is set? Thank you in advance. I'm sorry, this was a problem with my environment. Now that I have identified the cause of environmental dependency, I would like to fix it and try it. Excuse me for making a noise.
https://github.com/huggingface/datasets/issues/864
Unable to download cnn_dailymail dataset
Same here! My kaggle notebook stopped working like yesterday. It's strange because I have fixed version of datasets==1.1.2
### Script to reproduce the error ``` from datasets import load_dataset train_dataset = load_dataset("cnn_dailymail", "3.0.0", split= 'train[:10%') valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]") ``` ### Error ``` --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-8-47c39c228935> in <module>() 1 from datasets import load_dataset 2 ----> 3 train_dataset = load_dataset("cnn_dailymail", "3.0.0", split= 'train[:10%') 4 valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]") 5 frames /usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 609 download_config=download_config, 610 download_mode=download_mode, --> 611 ignore_verifications=ignore_verifications, 612 ) 613 /usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 469 if not downloaded_from_gcs: 470 self._download_and_prepare( --> 471 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 472 ) 473 # Sync info /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 524 split_dict = SplitDict(dataset_name=self.name) 525 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 526 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 527 528 # Checksums verification /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _split_generators(self, dl_manager) 252 def _split_generators(self, dl_manager): 253 dl_paths = dl_manager.download_and_extract(_DL_URLS) --> 254 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN) 255 # Generate shared vocabulary 256 /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _subset_filenames(dl_paths, split) 153 else: 154 logging.fatal("Unsupported split: %s", split) --> 155 cnn = _find_files(dl_paths, "cnn", urls) 156 dm = _find_files(dl_paths, "dm", urls) 157 return cnn + dm /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' ``` Thanks for any suggestions.
18
Unable to download cnn_dailymail dataset ### Script to reproduce the error ``` from datasets import load_dataset train_dataset = load_dataset("cnn_dailymail", "3.0.0", split= 'train[:10%') valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]") ``` ### Error ``` --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-8-47c39c228935> in <module>() 1 from datasets import load_dataset 2 ----> 3 train_dataset = load_dataset("cnn_dailymail", "3.0.0", split= 'train[:10%') 4 valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]") 5 frames /usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 609 download_config=download_config, 610 download_mode=download_mode, --> 611 ignore_verifications=ignore_verifications, 612 ) 613 /usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 469 if not downloaded_from_gcs: 470 self._download_and_prepare( --> 471 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 472 ) 473 # Sync info /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 524 split_dict = SplitDict(dataset_name=self.name) 525 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 526 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 527 528 # Checksums verification /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _split_generators(self, dl_manager) 252 def _split_generators(self, dl_manager): 253 dl_paths = dl_manager.download_and_extract(_DL_URLS) --> 254 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN) 255 # Generate shared vocabulary 256 /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _subset_filenames(dl_paths, split) 153 else: 154 logging.fatal("Unsupported split: %s", split) --> 155 cnn = _find_files(dl_paths, "cnn", urls) 156 dm = _find_files(dl_paths, "dm", urls) 157 return cnn + dm /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' ``` Thanks for any suggestions. Same here! My kaggle notebook stopped working like yesterday. It's strange because I have fixed version of datasets==1.1.2
https://github.com/huggingface/datasets/issues/864
Unable to download cnn_dailymail dataset
I couldn't reproduce unfortunately. I tried ```python from datasets import load_dataset load_dataset("cnn_dailymail", "3.0.0", download_mode="force_redownload") ``` and it worked fine on both my env (python 3.7.2) and colab (python 3.6.9) Maybe there was an issue with the google drive download link of the dataset ? Are you still having the issue ? If so could your give me more info about your python and requests version ?
### Script to reproduce the error ``` from datasets import load_dataset train_dataset = load_dataset("cnn_dailymail", "3.0.0", split= 'train[:10%') valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]") ``` ### Error ``` --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-8-47c39c228935> in <module>() 1 from datasets import load_dataset 2 ----> 3 train_dataset = load_dataset("cnn_dailymail", "3.0.0", split= 'train[:10%') 4 valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]") 5 frames /usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 609 download_config=download_config, 610 download_mode=download_mode, --> 611 ignore_verifications=ignore_verifications, 612 ) 613 /usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 469 if not downloaded_from_gcs: 470 self._download_and_prepare( --> 471 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 472 ) 473 # Sync info /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 524 split_dict = SplitDict(dataset_name=self.name) 525 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 526 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 527 528 # Checksums verification /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _split_generators(self, dl_manager) 252 def _split_generators(self, dl_manager): 253 dl_paths = dl_manager.download_and_extract(_DL_URLS) --> 254 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN) 255 # Generate shared vocabulary 256 /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _subset_filenames(dl_paths, split) 153 else: 154 logging.fatal("Unsupported split: %s", split) --> 155 cnn = _find_files(dl_paths, "cnn", urls) 156 dm = _find_files(dl_paths, "dm", urls) 157 return cnn + dm /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' ``` Thanks for any suggestions.
66
Unable to download cnn_dailymail dataset ### Script to reproduce the error ``` from datasets import load_dataset train_dataset = load_dataset("cnn_dailymail", "3.0.0", split= 'train[:10%') valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]") ``` ### Error ``` --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-8-47c39c228935> in <module>() 1 from datasets import load_dataset 2 ----> 3 train_dataset = load_dataset("cnn_dailymail", "3.0.0", split= 'train[:10%') 4 valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]") 5 frames /usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 609 download_config=download_config, 610 download_mode=download_mode, --> 611 ignore_verifications=ignore_verifications, 612 ) 613 /usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 469 if not downloaded_from_gcs: 470 self._download_and_prepare( --> 471 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 472 ) 473 # Sync info /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 524 split_dict = SplitDict(dataset_name=self.name) 525 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 526 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 527 528 # Checksums verification /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _split_generators(self, dl_manager) 252 def _split_generators(self, dl_manager): 253 dl_paths = dl_manager.download_and_extract(_DL_URLS) --> 254 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN) 255 # Generate shared vocabulary 256 /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _subset_filenames(dl_paths, split) 153 else: 154 logging.fatal("Unsupported split: %s", split) --> 155 cnn = _find_files(dl_paths, "cnn", urls) 156 dm = _find_files(dl_paths, "dm", urls) 157 return cnn + dm /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' ``` Thanks for any suggestions. I couldn't reproduce unfortunately. I tried ```python from datasets import load_dataset load_dataset("cnn_dailymail", "3.0.0", download_mode="force_redownload") ``` and it worked fine on both my env (python 3.7.2) and colab (python 3.6.9) Maybe there was an issue with the google drive download link of the dataset ? Are you still having the issue ? If so could your give me more info about your python and requests version ?
https://github.com/huggingface/datasets/issues/864
Unable to download cnn_dailymail dataset
No, It's working fine now. Very strange. Here are my python and request versions requests 2.24.0 Python 3.8.2
### Script to reproduce the error ``` from datasets import load_dataset train_dataset = load_dataset("cnn_dailymail", "3.0.0", split= 'train[:10%') valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]") ``` ### Error ``` --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-8-47c39c228935> in <module>() 1 from datasets import load_dataset 2 ----> 3 train_dataset = load_dataset("cnn_dailymail", "3.0.0", split= 'train[:10%') 4 valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]") 5 frames /usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 609 download_config=download_config, 610 download_mode=download_mode, --> 611 ignore_verifications=ignore_verifications, 612 ) 613 /usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 469 if not downloaded_from_gcs: 470 self._download_and_prepare( --> 471 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 472 ) 473 # Sync info /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 524 split_dict = SplitDict(dataset_name=self.name) 525 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 526 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 527 528 # Checksums verification /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _split_generators(self, dl_manager) 252 def _split_generators(self, dl_manager): 253 dl_paths = dl_manager.download_and_extract(_DL_URLS) --> 254 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN) 255 # Generate shared vocabulary 256 /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _subset_filenames(dl_paths, split) 153 else: 154 logging.fatal("Unsupported split: %s", split) --> 155 cnn = _find_files(dl_paths, "cnn", urls) 156 dm = _find_files(dl_paths, "dm", urls) 157 return cnn + dm /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' ``` Thanks for any suggestions.
18
Unable to download cnn_dailymail dataset ### Script to reproduce the error ``` from datasets import load_dataset train_dataset = load_dataset("cnn_dailymail", "3.0.0", split= 'train[:10%') valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]") ``` ### Error ``` --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-8-47c39c228935> in <module>() 1 from datasets import load_dataset 2 ----> 3 train_dataset = load_dataset("cnn_dailymail", "3.0.0", split= 'train[:10%') 4 valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]") 5 frames /usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, save_infos, script_version, **config_kwargs) 609 download_config=download_config, 610 download_mode=download_mode, --> 611 ignore_verifications=ignore_verifications, 612 ) 613 /usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 469 if not downloaded_from_gcs: 470 self._download_and_prepare( --> 471 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 472 ) 473 # Sync info /usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 524 split_dict = SplitDict(dataset_name=self.name) 525 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 526 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 527 528 # Checksums verification /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _split_generators(self, dl_manager) 252 def _split_generators(self, dl_manager): 253 dl_paths = dl_manager.download_and_extract(_DL_URLS) --> 254 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN) 255 # Generate shared vocabulary 256 /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _subset_filenames(dl_paths, split) 153 else: 154 logging.fatal("Unsupported split: %s", split) --> 155 cnn = _find_files(dl_paths, "cnn", urls) 156 dm = _find_files(dl_paths, "dm", urls) 157 return cnn + dm /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 132 else: 133 logging.fatal("Unsupported publisher: %s", publisher) --> 134 files = sorted(os.listdir(top_dir)) 135 136 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' ``` Thanks for any suggestions. No, It's working fine now. Very strange. Here are my python and request versions requests 2.24.0 Python 3.8.2
https://github.com/huggingface/datasets/issues/861
Possible Bug: Small training/dataset file creates gigantic output
The preprocessing tokenizes the input text. Tokenization outputs `input_ids`, `attention_mask`, `token_type_ids` and `special_tokens_mask`. All those are of length`max_seq_length` because of padding. Therefore for each sample it generate 4 *`max_seq_length` integers. Currently they're all saved as int64. This is why the tokenization takes so much space. I'm sure we can optimize that though What do you think @sgugger ?
Hey guys, I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + dataets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB raw text file but I can't even end the preprocessing handled by datasets because this tiny 5 GB text file becomes more than 1 TB when processing. My system was running out of space and crashed prematurely. I've done training from scratch via Google's bert repo in the past and I can remember that the resulting pretraining data can become quite big. But 5 GB becoming 1 TB was never the case. Is this considered normal or is it a bug? I've used the following CMD: `python xla_spawn.py --num_cores=8 run_mlm.py --model_type bert --config_name config.json --tokenizer_name tokenizer.json --train_file dataset_full.txt --do_train --output_dir out --max_steps 500000 --save_steps 2500 --save_total_limit 2 --prediction_loss_only --line_by_line --max_seq_length 128 --pad_to_max_length --preprocessing_num_workers 16 --per_device_train_batch_size 128 --overwrite_output_dir --debug`
58
Possible Bug: Small training/dataset file creates gigantic output Hey guys, I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + dataets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB raw text file but I can't even end the preprocessing handled by datasets because this tiny 5 GB text file becomes more than 1 TB when processing. My system was running out of space and crashed prematurely. I've done training from scratch via Google's bert repo in the past and I can remember that the resulting pretraining data can become quite big. But 5 GB becoming 1 TB was never the case. Is this considered normal or is it a bug? I've used the following CMD: `python xla_spawn.py --num_cores=8 run_mlm.py --model_type bert --config_name config.json --tokenizer_name tokenizer.json --train_file dataset_full.txt --do_train --output_dir out --max_steps 500000 --save_steps 2500 --save_total_limit 2 --prediction_loss_only --line_by_line --max_seq_length 128 --pad_to_max_length --preprocessing_num_workers 16 --per_device_train_batch_size 128 --overwrite_output_dir --debug` The preprocessing tokenizes the input text. Tokenization outputs `input_ids`, `attention_mask`, `token_type_ids` and `special_tokens_mask`. All those are of length`max_seq_length` because of padding. Therefore for each sample it generate 4 *`max_seq_length` integers. Currently they're all saved as int64. This is why the tokenization takes so much space. I'm sure we can optimize that though What do you think @sgugger ?
https://github.com/huggingface/datasets/issues/861
Possible Bug: Small training/dataset file creates gigantic output
First I think we should disable padding in the dataset processing and let the data collator do it. Then I'm wondering if you need attention_mask and token_type_ids at this point ? Finally we can also specify the output feature types at this line https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py#L280 to use more optimized integer precisions for the output. Maybe something like: - input_ids: uint16 or uint32 - token_type_ids: uint8 or bool - attention_mask: bool - special_tokens_mask: bool Also IMO these changes are all on the `transformers` side. Maybe we should discuss on the `transformers` repo
Hey guys, I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + dataets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB raw text file but I can't even end the preprocessing handled by datasets because this tiny 5 GB text file becomes more than 1 TB when processing. My system was running out of space and crashed prematurely. I've done training from scratch via Google's bert repo in the past and I can remember that the resulting pretraining data can become quite big. But 5 GB becoming 1 TB was never the case. Is this considered normal or is it a bug? I've used the following CMD: `python xla_spawn.py --num_cores=8 run_mlm.py --model_type bert --config_name config.json --tokenizer_name tokenizer.json --train_file dataset_full.txt --do_train --output_dir out --max_steps 500000 --save_steps 2500 --save_total_limit 2 --prediction_loss_only --line_by_line --max_seq_length 128 --pad_to_max_length --preprocessing_num_workers 16 --per_device_train_batch_size 128 --overwrite_output_dir --debug`
90
Possible Bug: Small training/dataset file creates gigantic output Hey guys, I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + dataets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB raw text file but I can't even end the preprocessing handled by datasets because this tiny 5 GB text file becomes more than 1 TB when processing. My system was running out of space and crashed prematurely. I've done training from scratch via Google's bert repo in the past and I can remember that the resulting pretraining data can become quite big. But 5 GB becoming 1 TB was never the case. Is this considered normal or is it a bug? I've used the following CMD: `python xla_spawn.py --num_cores=8 run_mlm.py --model_type bert --config_name config.json --tokenizer_name tokenizer.json --train_file dataset_full.txt --do_train --output_dir out --max_steps 500000 --save_steps 2500 --save_total_limit 2 --prediction_loss_only --line_by_line --max_seq_length 128 --pad_to_max_length --preprocessing_num_workers 16 --per_device_train_batch_size 128 --overwrite_output_dir --debug` First I think we should disable padding in the dataset processing and let the data collator do it. Then I'm wondering if you need attention_mask and token_type_ids at this point ? Finally we can also specify the output feature types at this line https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py#L280 to use more optimized integer precisions for the output. Maybe something like: - input_ids: uint16 or uint32 - token_type_ids: uint8 or bool - attention_mask: bool - special_tokens_mask: bool Also IMO these changes are all on the `transformers` side. Maybe we should discuss on the `transformers` repo
https://github.com/huggingface/datasets/issues/861
Possible Bug: Small training/dataset file creates gigantic output
> First I think we should disable padding in the dataset processing and let the data collator do it. No, you can't do that on TPUs as dynamic shapes will result in a very slow training. The script can however be tweaked to use the `PaddingDataCollator` with a fixed max length instead of dynamic batching. For the other optimizations, they can be done by changing the script directly for each user's use case. Not sure we can find something that is general enough to be in transformers or the examples script.
Hey guys, I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + dataets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB raw text file but I can't even end the preprocessing handled by datasets because this tiny 5 GB text file becomes more than 1 TB when processing. My system was running out of space and crashed prematurely. I've done training from scratch via Google's bert repo in the past and I can remember that the resulting pretraining data can become quite big. But 5 GB becoming 1 TB was never the case. Is this considered normal or is it a bug? I've used the following CMD: `python xla_spawn.py --num_cores=8 run_mlm.py --model_type bert --config_name config.json --tokenizer_name tokenizer.json --train_file dataset_full.txt --do_train --output_dir out --max_steps 500000 --save_steps 2500 --save_total_limit 2 --prediction_loss_only --line_by_line --max_seq_length 128 --pad_to_max_length --preprocessing_num_workers 16 --per_device_train_batch_size 128 --overwrite_output_dir --debug`
91
Possible Bug: Small training/dataset file creates gigantic output Hey guys, I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + dataets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB raw text file but I can't even end the preprocessing handled by datasets because this tiny 5 GB text file becomes more than 1 TB when processing. My system was running out of space and crashed prematurely. I've done training from scratch via Google's bert repo in the past and I can remember that the resulting pretraining data can become quite big. But 5 GB becoming 1 TB was never the case. Is this considered normal or is it a bug? I've used the following CMD: `python xla_spawn.py --num_cores=8 run_mlm.py --model_type bert --config_name config.json --tokenizer_name tokenizer.json --train_file dataset_full.txt --do_train --output_dir out --max_steps 500000 --save_steps 2500 --save_total_limit 2 --prediction_loss_only --line_by_line --max_seq_length 128 --pad_to_max_length --preprocessing_num_workers 16 --per_device_train_batch_size 128 --overwrite_output_dir --debug` > First I think we should disable padding in the dataset processing and let the data collator do it. No, you can't do that on TPUs as dynamic shapes will result in a very slow training. The script can however be tweaked to use the `PaddingDataCollator` with a fixed max length instead of dynamic batching. For the other optimizations, they can be done by changing the script directly for each user's use case. Not sure we can find something that is general enough to be in transformers or the examples script.
https://github.com/huggingface/datasets/issues/861
Possible Bug: Small training/dataset file creates gigantic output
Oh yes right.. Do you think that a lazy map feature on the `datasets` side could help to avoid storing padded tokenized texts then ?
Hey guys, I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + dataets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB raw text file but I can't even end the preprocessing handled by datasets because this tiny 5 GB text file becomes more than 1 TB when processing. My system was running out of space and crashed prematurely. I've done training from scratch via Google's bert repo in the past and I can remember that the resulting pretraining data can become quite big. But 5 GB becoming 1 TB was never the case. Is this considered normal or is it a bug? I've used the following CMD: `python xla_spawn.py --num_cores=8 run_mlm.py --model_type bert --config_name config.json --tokenizer_name tokenizer.json --train_file dataset_full.txt --do_train --output_dir out --max_steps 500000 --save_steps 2500 --save_total_limit 2 --prediction_loss_only --line_by_line --max_seq_length 128 --pad_to_max_length --preprocessing_num_workers 16 --per_device_train_batch_size 128 --overwrite_output_dir --debug`
25
Possible Bug: Small training/dataset file creates gigantic output Hey guys, I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + dataets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB raw text file but I can't even end the preprocessing handled by datasets because this tiny 5 GB text file becomes more than 1 TB when processing. My system was running out of space and crashed prematurely. I've done training from scratch via Google's bert repo in the past and I can remember that the resulting pretraining data can become quite big. But 5 GB becoming 1 TB was never the case. Is this considered normal or is it a bug? I've used the following CMD: `python xla_spawn.py --num_cores=8 run_mlm.py --model_type bert --config_name config.json --tokenizer_name tokenizer.json --train_file dataset_full.txt --do_train --output_dir out --max_steps 500000 --save_steps 2500 --save_total_limit 2 --prediction_loss_only --line_by_line --max_seq_length 128 --pad_to_max_length --preprocessing_num_workers 16 --per_device_train_batch_size 128 --overwrite_output_dir --debug` Oh yes right.. Do you think that a lazy map feature on the `datasets` side could help to avoid storing padded tokenized texts then ?
https://github.com/huggingface/datasets/issues/861
Possible Bug: Small training/dataset file creates gigantic output
I think I can do the tweak mentioned above with the data collator as short fix (but fully focused on v4 right now so that will be for later this week, beginning of next week :-) ). If it doesn't hurt performance to tokenize on the fly, that would clearly be the long-term solution however!
Hey guys, I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + dataets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB raw text file but I can't even end the preprocessing handled by datasets because this tiny 5 GB text file becomes more than 1 TB when processing. My system was running out of space and crashed prematurely. I've done training from scratch via Google's bert repo in the past and I can remember that the resulting pretraining data can become quite big. But 5 GB becoming 1 TB was never the case. Is this considered normal or is it a bug? I've used the following CMD: `python xla_spawn.py --num_cores=8 run_mlm.py --model_type bert --config_name config.json --tokenizer_name tokenizer.json --train_file dataset_full.txt --do_train --output_dir out --max_steps 500000 --save_steps 2500 --save_total_limit 2 --prediction_loss_only --line_by_line --max_seq_length 128 --pad_to_max_length --preprocessing_num_workers 16 --per_device_train_batch_size 128 --overwrite_output_dir --debug`
55
Possible Bug: Small training/dataset file creates gigantic output Hey guys, I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + dataets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB raw text file but I can't even end the preprocessing handled by datasets because this tiny 5 GB text file becomes more than 1 TB when processing. My system was running out of space and crashed prematurely. I've done training from scratch via Google's bert repo in the past and I can remember that the resulting pretraining data can become quite big. But 5 GB becoming 1 TB was never the case. Is this considered normal or is it a bug? I've used the following CMD: `python xla_spawn.py --num_cores=8 run_mlm.py --model_type bert --config_name config.json --tokenizer_name tokenizer.json --train_file dataset_full.txt --do_train --output_dir out --max_steps 500000 --save_steps 2500 --save_total_limit 2 --prediction_loss_only --line_by_line --max_seq_length 128 --pad_to_max_length --preprocessing_num_workers 16 --per_device_train_batch_size 128 --overwrite_output_dir --debug` I think I can do the tweak mentioned above with the data collator as short fix (but fully focused on v4 right now so that will be for later this week, beginning of next week :-) ). If it doesn't hurt performance to tokenize on the fly, that would clearly be the long-term solution however!
https://github.com/huggingface/datasets/issues/861
Possible Bug: Small training/dataset file creates gigantic output
> Hey guys, > > I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + dataets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB raw text file but I can't even end the preprocessing handled by datasets because this tiny 5 GB text file becomes more than 1 TB when processing. My system was running out of space and crashed prematurely. > > I've done training from scratch via Google's bert repo in the past and I can remember that the resulting pretraining data can become quite big. But 5 GB becoming 1 TB was never the case. Is this considered normal or is it a bug? > > I've used the following CMD: > `python xla_spawn.py --num_cores=8 run_mlm.py --model_type bert --config_name config.json --tokenizer_name tokenizer.json --train_file dataset_full.txt --do_train --output_dir out --max_steps 500000 --save_steps 2500 --save_total_limit 2 --prediction_loss_only --line_by_line --max_seq_length 128 --pad_to_max_length --preprocessing_num_workers 16 --per_device_train_batch_size 128 --overwrite_output_dir --debug` It's actually because of the parameter 'preprocessing_num_worker' when using TPU. I am also planning to have my model trained on the google TPU with a 11gb text corpus. With x8 cores enabled, each TPU core has its own dataset. When not using distributed training, the preprocessed file is about 77gb. On the opposite, if enable xla, the file produced will easily consume all my free space(more than 220gb, I think it will be, in the end, around 600gb ). So I think that's maybe where the problem came from. Is there any possibility that all of the cores share the same preprocess dataset? @sgugger @RammMaschine
Hey guys, I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + dataets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB raw text file but I can't even end the preprocessing handled by datasets because this tiny 5 GB text file becomes more than 1 TB when processing. My system was running out of space and crashed prematurely. I've done training from scratch via Google's bert repo in the past and I can remember that the resulting pretraining data can become quite big. But 5 GB becoming 1 TB was never the case. Is this considered normal or is it a bug? I've used the following CMD: `python xla_spawn.py --num_cores=8 run_mlm.py --model_type bert --config_name config.json --tokenizer_name tokenizer.json --train_file dataset_full.txt --do_train --output_dir out --max_steps 500000 --save_steps 2500 --save_total_limit 2 --prediction_loss_only --line_by_line --max_seq_length 128 --pad_to_max_length --preprocessing_num_workers 16 --per_device_train_batch_size 128 --overwrite_output_dir --debug`
273
Possible Bug: Small training/dataset file creates gigantic output Hey guys, I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + dataets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB raw text file but I can't even end the preprocessing handled by datasets because this tiny 5 GB text file becomes more than 1 TB when processing. My system was running out of space and crashed prematurely. I've done training from scratch via Google's bert repo in the past and I can remember that the resulting pretraining data can become quite big. But 5 GB becoming 1 TB was never the case. Is this considered normal or is it a bug? I've used the following CMD: `python xla_spawn.py --num_cores=8 run_mlm.py --model_type bert --config_name config.json --tokenizer_name tokenizer.json --train_file dataset_full.txt --do_train --output_dir out --max_steps 500000 --save_steps 2500 --save_total_limit 2 --prediction_loss_only --line_by_line --max_seq_length 128 --pad_to_max_length --preprocessing_num_workers 16 --per_device_train_batch_size 128 --overwrite_output_dir --debug` > Hey guys, > > I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + dataets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB raw text file but I can't even end the preprocessing handled by datasets because this tiny 5 GB text file becomes more than 1 TB when processing. My system was running out of space and crashed prematurely. > > I've done training from scratch via Google's bert repo in the past and I can remember that the resulting pretraining data can become quite big. But 5 GB becoming 1 TB was never the case. Is this considered normal or is it a bug? > > I've used the following CMD: > `python xla_spawn.py --num_cores=8 run_mlm.py --model_type bert --config_name config.json --tokenizer_name tokenizer.json --train_file dataset_full.txt --do_train --output_dir out --max_steps 500000 --save_steps 2500 --save_total_limit 2 --prediction_loss_only --line_by_line --max_seq_length 128 --pad_to_max_length --preprocessing_num_workers 16 --per_device_train_batch_size 128 --overwrite_output_dir --debug` It's actually because of the parameter 'preprocessing_num_worker' when using TPU. I am also planning to have my model trained on the google TPU with a 11gb text corpus. With x8 cores enabled, each TPU core has its own dataset. When not using distributed training, the preprocessed file is about 77gb. On the opposite, if enable xla, the file produced will easily consume all my free space(more than 220gb, I think it will be, in the end, around 600gb ). So I think that's maybe where the problem came from. Is there any possibility that all of the cores share the same preprocess dataset? @sgugger @RammMaschine
https://github.com/huggingface/datasets/issues/861
Possible Bug: Small training/dataset file creates gigantic output
Hi @NebelAI, we have optimized Datasets' disk usage in the latest release v1.5. Feel free to update your Datasets version ```shell pip install -U datasets ``` and see if it better suits your needs.
Hey guys, I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + dataets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB raw text file but I can't even end the preprocessing handled by datasets because this tiny 5 GB text file becomes more than 1 TB when processing. My system was running out of space and crashed prematurely. I've done training from scratch via Google's bert repo in the past and I can remember that the resulting pretraining data can become quite big. But 5 GB becoming 1 TB was never the case. Is this considered normal or is it a bug? I've used the following CMD: `python xla_spawn.py --num_cores=8 run_mlm.py --model_type bert --config_name config.json --tokenizer_name tokenizer.json --train_file dataset_full.txt --do_train --output_dir out --max_steps 500000 --save_steps 2500 --save_total_limit 2 --prediction_loss_only --line_by_line --max_seq_length 128 --pad_to_max_length --preprocessing_num_workers 16 --per_device_train_batch_size 128 --overwrite_output_dir --debug`
34
Possible Bug: Small training/dataset file creates gigantic output Hey guys, I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + dataets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB raw text file but I can't even end the preprocessing handled by datasets because this tiny 5 GB text file becomes more than 1 TB when processing. My system was running out of space and crashed prematurely. I've done training from scratch via Google's bert repo in the past and I can remember that the resulting pretraining data can become quite big. But 5 GB becoming 1 TB was never the case. Is this considered normal or is it a bug? I've used the following CMD: `python xla_spawn.py --num_cores=8 run_mlm.py --model_type bert --config_name config.json --tokenizer_name tokenizer.json --train_file dataset_full.txt --do_train --output_dir out --max_steps 500000 --save_steps 2500 --save_total_limit 2 --prediction_loss_only --line_by_line --max_seq_length 128 --pad_to_max_length --preprocessing_num_workers 16 --per_device_train_batch_size 128 --overwrite_output_dir --debug` Hi @NebelAI, we have optimized Datasets' disk usage in the latest release v1.5. Feel free to update your Datasets version ```shell pip install -U datasets ``` and see if it better suits your needs.
https://github.com/huggingface/datasets/issues/854
wmt16 does not download
Hi,I also posted it to the forum, but this is a bug, perhaps it needs to be reported here? thanks
Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/wmt16/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308/wmt_utils.py", line 755, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-ro.tmx.gz
20
wmt16 does not download Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/wmt16/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308/wmt_utils.py", line 755, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-ro.tmx.gz Hi,I also posted it to the forum, but this is a bug, perhaps it needs to be reported here? thanks
https://github.com/huggingface/datasets/issues/854
wmt16 does not download
It looks like the official OPUS server for WMT16 doesn't provide the data files anymore (503 error). I searched a bit and couldn't find a mirror except maybe http://nlp.ffzg.hr/resources/corpora/setimes/ (the data are a cleaned version of the original ones though) Should we consider replacing the old urls with these ones even though it's not the exact same data ?
Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/wmt16/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308/wmt_utils.py", line 755, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-ro.tmx.gz
59
wmt16 does not download Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/wmt16/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308/wmt_utils.py", line 755, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-ro.tmx.gz It looks like the official OPUS server for WMT16 doesn't provide the data files anymore (503 error). I searched a bit and couldn't find a mirror except maybe http://nlp.ffzg.hr/resources/corpora/setimes/ (the data are a cleaned version of the original ones though) Should we consider replacing the old urls with these ones even though it's not the exact same data ?
https://github.com/huggingface/datasets/issues/854
wmt16 does not download
The data storage is down at the moment. Sorry. Hopefully, it will come back soon. Apologies for the inconvenience ...
Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/wmt16/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308/wmt_utils.py", line 755, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-ro.tmx.gz
20
wmt16 does not download Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/wmt16/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308/wmt_utils.py", line 755, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-ro.tmx.gz The data storage is down at the moment. Sorry. Hopefully, it will come back soon. Apologies for the inconvenience ...
https://github.com/huggingface/datasets/issues/854
wmt16 does not download
Dear great huggingface team, this is not working yet, I really appreciate some temporary fix on this, I need this for my project and this is time sensitive and I will be grateful for your help on this.
Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/wmt16/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308/wmt_utils.py", line 755, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-ro.tmx.gz
38
wmt16 does not download Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/wmt16/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308/wmt_utils.py", line 755, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-ro.tmx.gz Dear great huggingface team, this is not working yet, I really appreciate some temporary fix on this, I need this for my project and this is time sensitive and I will be grateful for your help on this.
https://github.com/huggingface/datasets/issues/854
wmt16 does not download
We have reached out to the OPUS team which is currently working on making the data available again. Cc @jorgtied
Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/wmt16/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308/wmt_utils.py", line 755, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-ro.tmx.gz
20
wmt16 does not download Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/wmt16/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308/wmt_utils.py", line 755, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-ro.tmx.gz We have reached out to the OPUS team which is currently working on making the data available again. Cc @jorgtied
https://github.com/huggingface/datasets/issues/854
wmt16 does not download
Hi, this is still down, I would be really grateful if you could ping them one more time. thank you so much.
Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/wmt16/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308/wmt_utils.py", line 755, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-ro.tmx.gz
22
wmt16 does not download Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/wmt16/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308/wmt_utils.py", line 755, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-ro.tmx.gz Hi, this is still down, I would be really grateful if you could ping them one more time. thank you so much.
https://github.com/huggingface/datasets/issues/854
wmt16 does not download
Hi I am trying with multiple setting of wmt datasets and all failed so far, I need to have at least one dataset working for testing somecodes, and this is really time sensitive, I greatly appreciate letting me know of one translation datasets currently working. thanks
Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/wmt16/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308/wmt_utils.py", line 755, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-ro.tmx.gz
46
wmt16 does not download Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/wmt16/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308/wmt_utils.py", line 755, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-ro.tmx.gz Hi I am trying with multiple setting of wmt datasets and all failed so far, I need to have at least one dataset working for testing somecodes, and this is really time sensitive, I greatly appreciate letting me know of one translation datasets currently working. thanks
https://github.com/huggingface/datasets/issues/854
wmt16 does not download
It is still down, unfortunately. I'm sorry for that. It should come up again later today or tomorrow at the latest if no additional complications will happen.
Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/wmt16/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308/wmt_utils.py", line 755, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-ro.tmx.gz
27
wmt16 does not download Hi, I appreciate your help with the following error, thanks >>> from datasets import load_dataset >>> dataset = load_dataset("wmt16", "ro-en", split="train") Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/wmt16/ro-en/1.0.0/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/root/.cache/huggingface/modules/datasets_modules/datasets/wmt16/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308/wmt_utils.py", line 755, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 181, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://opus.nlpl.eu/download.php?f=SETIMES/v2/tmx/en-ro.tmx.gz It is still down, unfortunately. I'm sorry for that. It should come up again later today or tomorrow at the latest if no additional complications will happen.
https://github.com/huggingface/datasets/issues/853
concatenate_datasets support axis=0 or 1 ?
Unfortunately `concatenate_datasets` only supports concatenating the rows, while what you want to achieve is concatenate the columns. Currently to add more columns to a dataset, one must use `map`. What you can do is somehting like this: ```python # suppose you have datasets d1, d2, d3 def add_columns(example, index): example.update(d2[index]) example.update(d3[index]) return example full_dataset = d1.map(add_columns, with_indices=True) ```
I want to achieve the following result ![image](https://user-images.githubusercontent.com/12437751/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png)
58
concatenate_datasets support axis=0 or 1 ? I want to achieve the following result ![image](https://user-images.githubusercontent.com/12437751/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png) Unfortunately `concatenate_datasets` only supports concatenating the rows, while what you want to achieve is concatenate the columns. Currently to add more columns to a dataset, one must use `map`. What you can do is somehting like this: ```python # suppose you have datasets d1, d2, d3 def add_columns(example, index): example.update(d2[index]) example.update(d3[index]) return example full_dataset = d1.map(add_columns, with_indices=True) ```
https://github.com/huggingface/datasets/issues/853
concatenate_datasets support axis=0 or 1 ?
That's not really difficult to add, though, no? I think it can be done without copy. Maybe let's add it to the roadmap?
I want to achieve the following result ![image](https://user-images.githubusercontent.com/12437751/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png)
23
concatenate_datasets support axis=0 or 1 ? I want to achieve the following result ![image](https://user-images.githubusercontent.com/12437751/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png) That's not really difficult to add, though, no? I think it can be done without copy. Maybe let's add it to the roadmap?
https://github.com/huggingface/datasets/issues/853
concatenate_datasets support axis=0 or 1 ?
Actually it's doable but requires to update the `Dataset._data_files` schema to support this. I'm re-opening this since we may want to add this in the future
I want to achieve the following result ![image](https://user-images.githubusercontent.com/12437751/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png)
26
concatenate_datasets support axis=0 or 1 ? I want to achieve the following result ![image](https://user-images.githubusercontent.com/12437751/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png) Actually it's doable but requires to update the `Dataset._data_files` schema to support this. I'm re-opening this since we may want to add this in the future
https://github.com/huggingface/datasets/issues/853
concatenate_datasets support axis=0 or 1 ?
Hi @lhoestq, I would love to help and add this feature if still needed. My plan is to add an axis variable in the `concatenate_datasets` function in `arrow_dataset.py` and when that is set to 1 concatenate columns instead of rows.
I want to achieve the following result ![image](https://user-images.githubusercontent.com/12437751/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png)
40
concatenate_datasets support axis=0 or 1 ? I want to achieve the following result ![image](https://user-images.githubusercontent.com/12437751/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png) Hi @lhoestq, I would love to help and add this feature if still needed. My plan is to add an axis variable in the `concatenate_datasets` function in `arrow_dataset.py` and when that is set to 1 concatenate columns instead of rows.
https://github.com/huggingface/datasets/issues/853
concatenate_datasets support axis=0 or 1 ?
Hi ! I would love to see this feature implemented as well :) Thank you for proposing your help ! Here is a few things about the current implementation: - A dataset object is a wrapper of one `pyarrow.Table` that contains the data - Pyarrow offers an API that allows to transform Table objects. For example there are functions like `concat_tables`, `Table.rename_columns`, `Table.add_column` etc. Therefore adding columns from another dataset is possible thanks to the pyarrow API and in particular `Table.add_column` :) However this breaks some features we have regarding pickle. A dataset object can be pickled and unpickled without loading all the data in memory. It is useful for multiprocessing for example. Pickling a dataset object is possible thanks to the `Dataset._data_files` which defines the list of arrow files that will be used to form the final Table (basically all the data from each files are concatenated on axis 0). Therefore to be able to add columns to a Dataset and still be able to work with it in a multiprocessing setup, we need to extend this last aspect to be able to reconstruct a Table object from multiple arrow files that are combined in both axis 0 and 1. Currently this reconstruction mechanism only supports axis 0. I'm sure we can figure something out that enables users to add columns from another dataset while keeping the multiprocessing support.
I want to achieve the following result ![image](https://user-images.githubusercontent.com/12437751/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png)
230
concatenate_datasets support axis=0 or 1 ? I want to achieve the following result ![image](https://user-images.githubusercontent.com/12437751/99207426-f0c8db80-27f8-11eb-820a-4d9f7287b742.png) Hi ! I would love to see this feature implemented as well :) Thank you for proposing your help ! Here is a few things about the current implementation: - A dataset object is a wrapper of one `pyarrow.Table` that contains the data - Pyarrow offers an API that allows to transform Table objects. For example there are functions like `concat_tables`, `Table.rename_columns`, `Table.add_column` etc. Therefore adding columns from another dataset is possible thanks to the pyarrow API and in particular `Table.add_column` :) However this breaks some features we have regarding pickle. A dataset object can be pickled and unpickled without loading all the data in memory. It is useful for multiprocessing for example. Pickling a dataset object is possible thanks to the `Dataset._data_files` which defines the list of arrow files that will be used to form the final Table (basically all the data from each files are concatenated on axis 0). Therefore to be able to add columns to a Dataset and still be able to work with it in a multiprocessing setup, we need to extend this last aspect to be able to reconstruct a Table object from multiple arrow files that are combined in both axis 0 and 1. Currently this reconstruction mechanism only supports axis 0. I'm sure we can figure something out that enables users to add columns from another dataset while keeping the multiprocessing support.