How to finetune with multiple GPUs

#13
by nlpdev3 - opened

How to finetune with multiple GPUs?

NLP Group of The University of Hong Kong org

Hi, thanks a lot for your interest in INSTRUCTOR!

The following command should use all available GPUs to finetune the model:

python train.py \
    --model_name_or_path sentence-transformers/gtr-t5-large \
    --output_dir {output_directory} \
    --cache_dir {cache_directory} \
    --max_source_length 512 \
    --num_train_epochs 10 \
    --save_steps 500 \
    --cl_temperature 0.01 \
    --warmup_ratio 0.1 \
    --learning_rate 2e-5 \
    --overwrite_output_dir
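If train.py follows the standard Hugging Face Trainer setup (the INSTRUCTOR training code is built on it), a plain python launch wraps the model in torch.nn.DataParallel across all visible GPUs. For one process per GPU with DistributedDataParallel, a torchrun launch along the following lines may also work; this is a sketch under that assumption, and --nproc_per_node=4 is an illustrative value:

# Sketch: one process per GPU; a Trainer-based script picks up the
# distributed environment set by torchrun and switches to
# DistributedDataParallel. --nproc_per_node=4 is an assumption;
# set it to the actual number of GPUs on your machine.
torchrun --nproc_per_node=4 train.py \
    --model_name_or_path sentence-transformers/gtr-t5-large \
    --output_dir {output_directory} \
    --cache_dir {cache_directory} \
    --max_source_length 512 \
    --num_train_epochs 10 \
    --save_steps 500 \
    --cl_temperature 0.01 \
    --warmup_ratio 0.1 \
    --learning_rate 2e-5 \
    --overwrite_output_dir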

You may also restrict which GPUs are used by setting CUDA_VISIBLE_DEVICES=<GPU_ids>.
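For example, to expose only the first two GPUs to the training process, prefix the same command with the variable (the device ids 0 and 1 are illustrative):

# Only GPUs 0 and 1 are visible to PyTorch; inside the process they are
# renumbered as devices 0 and 1. Everything after train.py is the same
# flag list as in the full command above.
CUDA_VISIBLE_DEVICES=0,1 python train.py --model_name_or_path sentence-transformers/gtr-t5-large ...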

For more details, you may refer to the training instructions.

nlpdev3 changed discussion status to closed
