Column summary:

| Column | Type | Min | Max |
| --- | --- | --- | --- |
| html_url | string (length) | 51 | 51 |
| title | string (length) | 6 | 280 |
| comments | string (length) | 67 | 24.7k |
| body | string (length) | 51 | 36.2k |
| comment_length | int64 | 16 | 1.45k |
| text | string (length) | 159 | 38.3k |
https://github.com/huggingface/datasets/issues/2300
Add VoxPopuli
Hey @jfainberg, this sounds great! I think adding a dependency would not be a big problem; however, automatically segmenting the data probably means that the following would take a very long time to run:

```python
dataset = load_dataset("voxpopuli", "french")
```

=> so as a start I think your option 2 is the way to go!
## Adding a Dataset

- **Name:** VoxPopuli
- **Description:** VoxPopuli is raw data collected from 2009-2020 European Parliament event recordings
- **Paper:** https://arxiv.org/abs/2101.00390
- **Data:** https://github.com/facebookresearch/voxpopuli
- **Motivation:** biggest unlabeled speech dataset

**Note**: Since the dataset is so huge, we should only add the `10k` config in the beginning.

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
54
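Purely as a hedged illustration (not from the thread): if the dataset is eventually added under the `voxpopuli` name with the proposed `10k` config, loading it would presumably look like the snippet below. The config name and its availability are assumptions based on the note above, not a confirmed API.

```python
from datasets import load_dataset

# Hypothetical usage once the loading script exists; the "10k" config name
# comes from the proposal above and may change in the final implementation.
voxpopuli = load_dataset("voxpopuli", "10k", split="train")
print(voxpopuli)
```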
https://github.com/huggingface/datasets/issues/2294
Slow #0 when using map to tokenize.
Hi! Have you tried other values for `preprocessing_num_workers`? Is it always process 0 that is slower? There is no difference between process 0 and the others except that it processes the first shard of the dataset.
Hi, _datasets_ is really amazing! I am following [run_mlm_no_trainer.py](url) to pre-train BERT, and it uses `tokenized_datasets = raw_datasets.map(tokenize_function, batched=True, num_proc=args.preprocessing_num_workers, remove_columns=column_names, load_from_cache_file=not args.overwrite_cache)` to tokenize with multiprocessing. However, I have found that when `num_proc` > 1, process _#0_ is much slower than the others. It looks like this: ![image](https://user-images.githubusercontent.com/31714566/116665555-81246280-a9cc-11eb-8a37-6e608ab310d0.png) It takes more than 12 hours for #0, while the others take just about half an hour. Could anyone tell me whether this is normal, and is there any way to speed it up?
39
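Not part of the thread, just a hedged sketch of the suggestion above: time a `map` run for several `num_proc` values to see whether process 0 is consistently the slow one. The corpus ("wikitext") and checkpoint ("bert-base-uncased") are placeholders, not taken from the issue.

```python
import time
from datasets import load_dataset
from transformers import AutoTokenizer

# Placeholder corpus and tokenizer, just to compare worker counts.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def tokenize_function(examples):
    return tokenizer(examples["text"], truncation=True, max_length=512)

for num_proc in (1, 2, 4, 8):
    start = time.time()
    dataset.map(tokenize_function, batched=True, num_proc=num_proc,
                load_from_cache_file=False)  # disable the cache so every run does real work
    print(f"num_proc={num_proc}: {time.time() - start:.1f}s")
```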
https://github.com/huggingface/datasets/issues/2294
Slow #0 when using map to tokenize.
Hi, I have found the reason for it. Before using the map function to tokenize the data, I concatenate Wikipedia and BookCorpus first, like this:

```python
if args.dataset_name1 is not None:
    dataset1 = load_dataset(args.dataset_name1, args.dataset_config_name1, split="train")
    dataset1 = dataset1.remove_columns('title')
if args.dataset_name2 is not None:
    dataset2 = load_dataset(args.dataset_name2, args.dataset_config_name2, split="train")
assert dataset1.features.type == dataset2.features.type, str(dataset1.features.type) + ';' + str(dataset2.features.type)
datasets12 = concatenate_datasets([dataset1, dataset2], split='train')
```

When I use just one dataset, e.g. Wikipedia, the problem no longer seems to exist: ![image](https://user-images.githubusercontent.com/31714566/116967059-13d24380-ace4-11eb-8d14-b7b9c9a275cc.png) BookCorpus has more rows than Wikipedia; however, it takes much more time to process each batch of Wikipedia than a batch of BookCorpus. When we first concatenate the two datasets and then use _map_ to process the concatenated dataset with e.g. `num_proc=5`, process 0 has to process all of the Wikipedia data, which is why #0 takes much longer to finish the job. The problem is caused by the different characteristics of the datasets. One solution might be to use _map_ on the two datasets separately first, then concatenate the tokenized and processed datasets before passing them to the `DataLoader`.
Hi, _datasets_ is really amazing! I am following [run_mlm_no_trainer.py](url) to pre-train BERT, and it uses `tokenized_datasets = raw_datasets.map(tokenize_function, batched=True, num_proc=args.preprocessing_num_workers, remove_columns=column_names, load_from_cache_file=not args.overwrite_cache)` to tokenize with multiprocessing. However, I have found that when `num_proc` > 1, process _#0_ is much slower than the others. It looks like this: ![image](https://user-images.githubusercontent.com/31714566/116665555-81246280-a9cc-11eb-8a37-6e608ab310d0.png) It takes more than 12 hours for #0, while the others take just about half an hour. Could anyone tell me whether this is normal, and is there any way to speed it up?
172
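A hedged sketch (my own, not from the thread) of the workaround described above: tokenize each corpus separately so every worker's shard comes from the same corpus, then concatenate the already-tokenized datasets. The dataset names and checkpoint mirror the thread but are placeholders; the original script reads them from command-line arguments.

```python
from datasets import load_dataset, concatenate_datasets
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder checkpoint
wiki = load_dataset("wikipedia", "20200501.en", split="train").remove_columns(["title"])
books = load_dataset("bookcorpus", split="train")

def tokenize_function(examples):
    return tokenizer(examples["text"], truncation=True, max_length=512)

# Tokenize each corpus on its own so the per-process shards are balanced within a corpus,
# then concatenate the tokenized datasets (their feature schemas now match).
wiki_tok = wiki.map(tokenize_function, batched=True, num_proc=5, remove_columns=["text"])
books_tok = books.map(tokenize_function, batched=True, num_proc=5, remove_columns=["text"])
combined = concatenate_datasets([wiki_tok, books_tok])
```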
https://github.com/huggingface/datasets/issues/2294
Slow #0 when using map to tokenize.
That makes sense! You can indeed use `map` on both datasets separately and then concatenate them. Another option is to concatenate, then shuffle, and then `map`.
Hi, _datasets_ is really amazing! I am following [run_mlm_no_trainer.py](url) to pre-train BERT, and it uses `tokenized_datasets = raw_datasets.map(tokenize_function, batched=True, num_proc=args.preprocessing_num_workers, remove_columns=column_names, load_from_cache_file=not args.overwrite_cache)` to tokenize with multiprocessing. However, I have found that when `num_proc` > 1, process _#0_ is much slower than the others. It looks like this: ![image](https://user-images.githubusercontent.com/31714566/116665555-81246280-a9cc-11eb-8a37-6e608ab310d0.png) It takes more than 12 hours for #0, while the others take just about half an hour. Could anyone tell me whether this is normal, and is there any way to speed it up?
26
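A small, self-contained sketch (not from the thread) of the alternative suggested in the reply: concatenate, shuffle, then `map`. Toy in-memory datasets stand in for Wikipedia and BookCorpus, and `fake_tokenize` stands in for a real tokenizer call; both are assumptions for illustration only.

```python
from datasets import Dataset, concatenate_datasets

# Two toy corpora standing in for Wikipedia (long rows) and BookCorpus (short rows).
ds_a = Dataset.from_dict({"text": ["a long wiki article " * 50] * 100})
ds_b = Dataset.from_dict({"text": ["a short book sentence"] * 1000})

def fake_tokenize(examples):
    # Stand-in for a real tokenizer call; returns one column of token counts.
    return {"num_tokens": [len(t.split()) for t in examples["text"]]}

# Shuffling after concatenation spreads the slow rows across all worker shards,
# so process 0 no longer receives all of the heavy corpus.
mixed = concatenate_datasets([ds_a, ds_b]).shuffle(seed=42)
tokenized = mixed.map(fake_tokenize, batched=True, num_proc=4, remove_columns=["text"])
```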
https://github.com/huggingface/datasets/issues/2288
Load_dataset for local CSV files
Hi, this is not a standard CSV file (it requires additional preprocessing), so I wouldn't label this as a bug. You could parse the examples with the regex module or the string API to extract the data, but the following approach is probably the easiest (once you load the data):

```python
import ast

# load the dataset and copy its features into `new_features` first
def process(ex):
    return {"tokens": ast.literal_eval(ex["tokens"]), "labels": ast.literal_eval(ex["labels"])}

dataset = dataset.map(process, features=new_features)
```
The load_dataset method fails to correctly load a dataset from CSV. I am working on a token-classification task (POS tagging), where each row in my CSV contains two columns, each of them holding a list of strings. Row example:

```
tokens               | labels
['I', 'am', 'John']  | ['PRON', 'AUX', 'PROPN']
```

The method loads each list as a string (e.g. `"['I', 'am', 'John']"`). To solve this issue, I copied the `Datasets.Features`, created `Sequence` types (instead of `Value`) and tried to cast the feature types:

```python
new_features['tokens'] = Sequence(feature=Value(dtype='string', id=None))
new_features['labels'] = Sequence(feature=ClassLabel(num_classes=len(tag2idx), names=list(unique_tags)))
dataset = dataset.cast(new_features)
```

but I got the following error:

```
ArrowNotImplementedError: Unsupported cast from string to list using function cast_list
```

I also tried to set the `features` parameter of the load_dataset method to my `new_features`, but this fails as well. How can this be solved?
72
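To make the suggestion above concrete, here is a hedged, self-contained sketch (not from the thread). The file name `pos_data.csv` and the column names are assumptions mirroring the issue; the pattern is to load the CSV with the generic `csv` loader, then parse the stringified lists with `ast.literal_eval` while attaching `Sequence` features in the same `map` call.

```python
import ast
from datasets import load_dataset, Features, Sequence, Value

# Hypothetical local file with the layout shown in the issue.
dataset = load_dataset("csv", data_files="pos_data.csv", split="train")

new_features = Features({
    "tokens": Sequence(Value("string")),
    "labels": Sequence(Value("string")),  # keep labels as strings for now
})

def process(ex):
    # Each cell holds a Python-literal string such as "['I', 'am', 'John']".
    return {
        "tokens": ast.literal_eval(ex["tokens"]),
        "labels": ast.literal_eval(ex["labels"]),
    }

dataset = dataset.map(process, features=new_features)
print(dataset[0])
```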
https://github.com/huggingface/datasets/issues/2288
Load_dataset for local CSV files
Hi, thanks for the reply. I have already used `ast.literal_eval` to evaluate the string into a list, but I was getting another error:

```
ArrowInvalid: Could not convert X with type str: tried to convert to int
```

Why does this happen? Should labels be mapped to their ids, using int instead of str?
The load_dataset method fails to correctly load a dataset from CSV. I am working on a token-classification task (POS tagging), where each row in my CSV contains two columns, each of them holding a list of strings. Row example:

```
tokens               | labels
['I', 'am', 'John']  | ['PRON', 'AUX', 'PROPN']
```

The method loads each list as a string (e.g. `"['I', 'am', 'John']"`). To solve this issue, I copied the `Datasets.Features`, created `Sequence` types (instead of `Value`) and tried to cast the feature types:

```python
new_features['tokens'] = Sequence(feature=Value(dtype='string', id=None))
new_features['labels'] = Sequence(feature=ClassLabel(num_classes=len(tag2idx), names=list(unique_tags)))
dataset = dataset.cast(new_features)
```

but I got the following error:

```
ArrowNotImplementedError: Unsupported cast from string to list using function cast_list
```

I also tried to set the `features` parameter of the load_dataset method to my `new_features`, but this fails as well. How can this be solved?
55
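The error suggests the `ClassLabel` feature expects integer class ids while the mapped rows still contain tag strings. As a hedged sketch of one way to resolve it (my own suggestion, not a reply from the thread): convert the tag strings with `ClassLabel.str2int` inside the same `map` call. The file name and tag list are assumptions.

```python
import ast
from datasets import load_dataset, Features, Sequence, Value, ClassLabel

# Hypothetical file and tag set, mirroring the example rows in the issue.
unique_tags = ["PRON", "AUX", "PROPN"]
labels_feature = ClassLabel(names=unique_tags)

dataset = load_dataset("csv", data_files="pos_data.csv", split="train")

new_features = Features({
    "tokens": Sequence(Value("string")),
    "labels": Sequence(labels_feature),
})

def process(ex):
    tokens = ast.literal_eval(ex["tokens"])
    tags = ast.literal_eval(ex["labels"])
    # str2int maps tag names (e.g. "PRON") to their integer class ids.
    return {"tokens": tokens, "labels": labels_feature.str2int(tags)}

dataset = dataset.map(process, features=new_features)
```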
https://github.com/huggingface/datasets/issues/2285
Help understanding how to build a dataset for language modeling as with the old TextDataset
I received an answer to this question on the Hugging Face Datasets forum from @lhoestq:

Hi! If you want to tokenize line by line, you can use this:

```python
max_seq_length = 512
num_proc = 4

def tokenize_function(examples):
    # Remove empty lines
    examples["text"] = [line for line in examples["text"] if len(line) > 0 and not line.isspace()]
    return tokenizer(
        examples["text"],
        truncation=True,
        max_length=max_seq_length,
    )

tokenized_dataset = dataset.map(
    tokenize_function,
    batched=True,
    num_proc=num_proc,
    remove_columns=["text"],
)
```

However, TextDataset did different processing: it concatenated all the texts and built blocks of size 512. If you need this behavior, you must apply an additional map function after the tokenization:

```python
# Main data processing function that will concatenate all texts from
# our dataset and generate chunks of max_seq_length.
def group_texts(examples):
    # Concatenate all texts.
    concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated_examples[list(examples.keys())[0]])
    # We drop the small remainder; we could add padding if the model supported it instead of this drop,
    # you can customize this part to your needs.
    total_length = (total_length // max_seq_length) * max_seq_length
    # Split by chunks of max_len.
    result = {
        k: [t[i : i + max_seq_length] for i in range(0, total_length, max_seq_length)]
        for k, t in concatenated_examples.items()
    }
    return result

# Note that with `batched=True`, this map processes 1,000 texts together,
# so group_texts throws away a remainder for each of those groups of 1,000 texts.
# You can adjust the batch_size here, but a higher value might be slower to preprocess.
tokenized_dataset = tokenized_dataset.map(
    group_texts,
    batched=True,
    num_proc=num_proc,
)
```

This code comes from the processing in the run_mlm.py example script of transformers.
Hello, I am trying to load a custom dataset that I will then use for language modeling. The dataset consists of a text file with a whole document on each line, meaning that each line exceeds the usual 512-token limit of most tokenizers. I would like to understand the process of building a text dataset that tokenizes each line, having previously split the documents in the dataset into lines of a "tokenizable" size, as the old TextDataset class would do, where you only had to do the following, and a tokenized dataset without text loss would be available to pass to a DataCollator:

```python
model_checkpoint = 'distilbert-base-uncased'

from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)

from transformers import TextDataset
dataset = TextDataset(
    tokenizer=tokenizer,
    file_path="path/to/text_file.txt",
    block_size=512,
)
```

For now, what I have is the following, which, of course, throws an error because each line is longer than the maximum block size of the tokenizer:

```python
import datasets
dataset = datasets.load_dataset('path/to/text_file.txt')

model_checkpoint = 'distilbert-base-uncased'
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)

def tokenize_function(examples):
    return tokenizer(examples["text"])

tokenized_datasets = dataset.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"])
tokenized_datasets
```

So what would be the "standard" way of creating a dataset in the way it was done before? Thank you very much for the help :))
270
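Pulling the thread together, a hedged end-to-end sketch (my own composition, not part of the forum answer): load the text file with the generic `text` loader, tokenize line by line, regroup into 512-token blocks as in the `group_texts` function quoted above, and hand the result to a masked-language-modeling collator. The file path and checkpoint are placeholders.

```python
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

max_seq_length = 512
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")  # placeholder checkpoint

# The generic "text" loader yields one example per line of the file.
dataset = load_dataset("text", data_files="path/to/text_file.txt", split="train")

def tokenize_function(examples):
    # Drop empty lines; long lines are fine because they get regrouped below.
    examples["text"] = [l for l in examples["text"] if len(l) > 0 and not l.isspace()]
    return tokenizer(examples["text"], return_special_tokens_mask=True)

def group_texts(examples):
    # Concatenate everything, then cut into fixed-size blocks (remainder dropped).
    concatenated = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = (len(concatenated[list(examples.keys())[0]]) // max_seq_length) * max_seq_length
    return {
        k: [t[i : i + max_seq_length] for i in range(0, total_length, max_seq_length)]
        for k, t in concatenated.items()
    }

lm_dataset = (
    dataset.map(tokenize_function, batched=True, remove_columns=["text"])
           .map(group_texts, batched=True)
)

# Ready to feed to a Trainer or DataLoader via an MLM collator.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
```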