Skip split generation.

#23
by luosuu

Hi,

I would like to skip split generation in my case, since I only use the dataset for training and no validation split is needed. When I run:

from datasets import load_dataset

raw_datasets = load_dataset(
    "togethercomputer/RedPajama-Data-1T",
    "default",
    cache_dir=cache_dir,  # local cache directory
    token=None,
    streaming=False,
)

It will tell me:

08/16/2023 15:28:57 - INFO - datasets.builder - Generating train split
Generating train split: 1173453 examples [13:48, 1303.80 examples/s]

which is slow. I also tried passing the additional argument split="train", but that does not help either; it still runs the split generation process.

Thank you

Together org

Hi @luosuu

There are no splits other than the train split in the dataset -- you can check this with a snippet like the one below. Loading the dataset takes a long time because it is roughly 3TB of data to download.
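For example, something like this should confirm it (a minimal check; the exact call may vary with your datasets version):

from datasets import get_dataset_split_names

# Only "train" is defined for the "default" config.
print(get_dataset_split_names("togethercomputer/RedPajama-Data-1T", "default"))
# ['train']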

If you want to speed up downloading and have access to multiple nodes, you can also load the different dataset subsets in parallel (for that you can specify the name argument to be one of arxiv, book, c4, common_crawl, github, stackexchange, wikipedia); see the sketch below.
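For example, a node assigned the arxiv subset could run something like this (a sketch -- substitute whichever subset each node is responsible for):

from datasets import load_dataset

# Each node downloads and prepares only its assigned subset.
arxiv = load_dataset(
    "togethercomputer/RedPajama-Data-1T",
    "arxiv",
    split="train",
)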

I hope this helps!

Hi @mauriceweber

Thank you for your reply. However, I have already downloaded the entire dataset to my local filesystem, and the split generation step still runs even when I specify a subset.

For example, if I run the official example from Hugging Face:

python run_mlm.py \
    --model_name_or_path roberta-base \
    --dataset_name togethercomputer/RedPajama-Data-1T \
    --dataset_config_name arxiv \
    --per_device_train_batch_size 8 \
    --preprocessing_num_workers 20 \
    --validation_split_percentage 0 \
    --line_by_line \
    --do_train

it still shows:

08/23/2023 15:51:44 - INFO - datasets.builder - Generating train split
Generating train split: 484885 examples [05:42, 1822.24 examples/s]

Thank you

I have the same question. Did you solve it?
