Distributed training with Optimum Habana
As models get bigger, parallelism has emerged as a strategy for training larger models on limited hardware and accelerating training speed by several orders of magnitude.
All the PyTorch examples and the GaudiTrainer script work out of the box with distributed training.
There are two ways of launching them:
- Using the gaudi_spawn.py script:
```bash
python gaudi_spawn.py \
    --world_size number_of_hpu_you_have --use_mpi \
    path_to_script.py --args1 --args2 ... --argsN
```
where --argX is an argument of the script to run in a distributed way. Examples are given for question answering and text classification in the repository's examples folder; a fuller invocation is sketched right after this list.
- Using the DistributedRunner directly in code:
```python
from optimum.habana.distributed import DistributedRunner
from optimum.utils import logging

world_size = 8  # Number of HPUs to use (1 or 8)

# Define the distributed runner
distributed_runner = DistributedRunner(
    command_list=["scripts/train.py --args1 --args2 ... --argsN"],
    world_size=world_size,
    use_mpi=True,
)

# Start the job
ret_code = distributed_runner.run()
```
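For instance, here is a sketch of how the question-answering example could be launched on 8 HPUs with the gaudi_spawn.py script. The script name (run_qa.py) and the flags below follow the repository's question-answering example but are given for illustration only; check that example's README for the exact, up-to-date arguments and values.

```bash
# Illustrative sketch: launch the question-answering example on 8 HPUs with MPI.
# run_qa.py and the flags below mirror the repository's question-answering example;
# refer to its README for the exact, up-to-date arguments and values.
python gaudi_spawn.py \
    --world_size 8 --use_mpi \
    run_qa.py \
    --model_name_or_path bert-large-uncased-whole-word-masking \
    --gaudi_config_name Habana/bert-large-uncased-whole-word-masking \
    --dataset_name squad \
    --do_train \
    --do_eval \
    --per_device_train_batch_size 24 \
    --learning_rate 3e-5 \
    --num_train_epochs 2 \
    --max_seq_length 384 \
    --output_dir /tmp/squad/ \
    --use_habana \
    --use_lazy_mode
```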
You can set the training argument --distribution_strategy fast_ddp for simpler and usually faster distributed training management, as illustrated below.
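As a sketch, fast_ddp is enabled by passing the flag like any other script argument, whichever launch method you use; the other arguments of the script are kept as placeholders here:

```bash
# Illustrative: enable fast_ddp on top of any distributed launch.
# --args1 ... --argsN stand for the other arguments of your script.
python gaudi_spawn.py \
    --world_size 8 --use_mpi \
    path_to_script.py --args1 --args2 ... --argsN \
    --distribution_strategy fast_ddp
```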
To go further, we invite you to read our guides about:
- Accelerating training
- Pretraining
- DeepSpeed to train bigger models
- Multi-node training to speed up your distributed runs even more