DistributedRunner

class optimum.habana.distributed.DistributedRunner( command_list = [], world_size = 1, use_mpi = False, use_deepspeed = False, use_env = False, map_by = 'socket', multi_hls = False )

Set up training hardware configurations and run distributed training commands.
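The setup methods below each assemble a launch command for one hardware topology. As a rough illustration only (not the library's actual implementation; the helper name and the exact flags are assumptions, with typical Open MPI options), a single-node mpirun launch line can be built like this:

```python
def build_mpirun_command(command, world_size=8, map_by="socket"):
    """Illustrative sketch: assemble a single-node mpirun launch line,
    similar in spirit to what create_single_hls_setup_mpirun produces.
    The flags actually emitted by optimum-habana may differ."""
    return (
        f"mpirun -n {world_size} "
        f"--bind-to core --map-by {map_by} "
        f"{command}"
    )

launch = build_mpirun_command("python train.py --bf16", world_size=8)
print(launch)
```

In practice, per the constructor signature above, one passes the training commands and topology to the class itself (e.g. `DistributedRunner(command_list=[...], world_size=8, use_mpi=True)`) and lets it assemble and execute the launch command.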

create_multi_hls_setup( )

Multi-node configuration setup for mpirun.

create_single_card_setup( )

Single-card configuration setup.

create_single_hls_setup( )

Single-node multi-card configuration setup.

create_single_hls_setup_deepspeed( )

Single-node multi-card configuration setup for DeepSpeed.

create_single_hls_setup_mpirun( )

Single-node multi-card configuration setup for mpirun.
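For comparison, the DeepSpeed path launches through the `deepspeed` CLI rather than through mpirun. A hypothetical sketch of a single-node launch command, using the standard DeepSpeed launcher's `--num_nodes`/`--num_gpus` flags (the helper name is illustrative and the flags actually emitted by optimum-habana may differ):

```python
def build_deepspeed_command(command, world_size=8):
    # Illustrative only: single-node launch via the DeepSpeed CLI,
    # analogous in spirit to create_single_hls_setup_deepspeed.
    return f"deepspeed --num_nodes 1 --num_gpus {world_size} {command}"

print(build_deepspeed_command("train.py --deepspeed ds_config.json"))
```

Setting `use_deepspeed=True` on the constructor selects this path, while `use_mpi=True` selects the mpirun-based setups above.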