Distributed training with Optimum Habana

As models get bigger, parallelism has emerged as a strategy for training larger models on limited hardware and speeding up training by several orders of magnitude.

All the PyTorch examples and scripts using GaudiTrainer work out of the box with distributed training. There are two ways of launching them:

  1. Using the gaudi_spawn.py script:
python gaudi_spawn.py \
    --world_size number_of_hpu_you_have --use_mpi \
    path_to_script.py --args1 --args2 ... --argsN

where --argX is an argument of the script to run in a distributed way. Examples are provided for question answering and text classification in the Optimum Habana examples; a sample invocation is sketched below.
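
As an illustration, a multi-card launch might look like the following sketch, assuming 8 HPUs and the question answering example script run_qa.py; the model, dataset, and HPU-specific flags shown here are illustrative assumptions and depend on the example you actually run:

# Illustrative only: fine-tune a BERT model on SQuAD across 8 HPUs
python gaudi_spawn.py \
    --world_size 8 --use_mpi \
    run_qa.py \
    --model_name_or_path bert-large-uncased-whole-word-masking \
    --gaudi_config_name Habana/bert-large-uncased-whole-word-masking \
    --dataset_name squad \
    --do_train \
    --do_eval \
    --per_device_train_batch_size 24 \
    --output_dir /tmp/squad_output \
    --use_habana \
    --use_lazy_mode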

  2. Using the DistributedRunner directly in code:
from optimum.habana.distributed import DistributedRunner
from optimum.utils import logging

world_size = 8  # Number of HPUs to use (1 or 8)

# Define the distributed runner
distributed_runner = DistributedRunner(
    command_list=["scripts/train.py --args1 --args2 ... --argsN"],  # command to run on the workers
    world_size=world_size,
    use_mpi=True,
)

# Start the job
ret_code = distributed_runner.run()
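
Both launch methods expect a training script that uses GaudiTrainer, which handles the distributed setup itself, so the script needs no extra code for multi-HPU runs. Below is a minimal sketch of what such a script (e.g. the scripts/train.py above) might look like for text classification; the model, dataset, and Gaudi configuration names are illustrative assumptions:

from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.habana import GaudiTrainer, GaudiTrainingArguments

model_name = "bert-base-uncased"  # illustrative model choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Tokenize a small text classification dataset
dataset = load_dataset("glue", "sst2", split="train")
dataset = dataset.map(
    lambda batch: tokenizer(batch["sentence"], truncation=True, padding="max_length"),
    batched=True,
)

# HPU-specific training arguments
training_args = GaudiTrainingArguments(
    output_dir="./results",
    use_habana=True,                               # run on HPU devices
    use_lazy_mode=True,                            # enable HPU lazy execution mode
    gaudi_config_name="Habana/bert-base-uncased",  # Gaudi configuration hosted on the Hub
    per_device_train_batch_size=8,
)

trainer = GaudiTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
)

trainer.train()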