Internals

Optimizer

DataLoader

The main work on your PyTorch DataLoader is done by the prepare_data_loader() function, with the help of the two sharding classes below (a usage sketch is included under the first entry):

BatchSamplerShard
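
As a minimal sketch of how this class shards batches across processes, assuming two processes and that whole batches are dealt out round-robin when batches are not split (exact constructor arguments and outputs may vary by version):

    from torch.utils.data import BatchSampler, SequentialSampler

    from accelerate.data_loader import BatchSamplerShard

    # Two batches of 4 drawn from 8 samples.
    batch_sampler = BatchSampler(SequentialSampler(range(8)), batch_size=4, drop_last=True)

    # Each process wraps the same batch sampler with its own process_index,
    # so each process sees only its own share of the batches.
    shard_0 = BatchSamplerShard(batch_sampler, num_processes=2, process_index=0)
    shard_1 = BatchSamplerShard(batch_sampler, num_processes=2, process_index=1)

    print(list(shard_0))  # e.g. [[0, 1, 2, 3]]
    print(list(shard_1))  # e.g. [[4, 5, 6, 7]]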

IterableDatasetShard

Distributed Config

AcceleratorState

class accelerate.state.AcceleratorState(fp16: bool = None, cpu: bool = False, _from_accelerator: bool = False)

This is a variation of a singleton class in the sense that all instances of AcceleratorState share the same state, which is initialized on the first instantiation (see the sketch after the attribute list below).

Attributes

  • device (torch.device) – The device to use.

  • distributed_type (DistributedType) – The type of distributed environment currently in use.

  • num_processes (int) – The number of processes currently launched in parallel.

  • process_index (int) – The index of the current process.

  • local_process_index (int) – The index of the current process on the current server.

  • use_fp16 (bool) – Whether or not the current script will use mixed precision.
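
A minimal sketch of the shared-state behavior; the printed values depend on how the script is launched:

    from accelerate import Accelerator
    from accelerate.state import AcceleratorState

    # The first Accelerator() initializes the shared state.
    accelerator = Accelerator()

    # Every AcceleratorState built afterwards reads from that same state.
    state = AcceleratorState()
    other = AcceleratorState()
    assert state.device == other.device
    print(state.distributed_type, state.num_processes, state.process_index)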

DistributedType

class accelerate.state.DistributedType(value)

Represents a type of distributed environment.

Values:

  • NO – Not a distributed environment, just a single process.

  • MULTI_GPU – Distributed on multiple GPUs.

  • TPU – Distributed on TPUs.
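
As an illustrative sketch, a script can branch on the detected environment through its Accelerator's state attribute (which branch runs depends on how the script is launched):

    from accelerate import Accelerator
    from accelerate.state import DistributedType

    accelerator = Accelerator()
    state = accelerator.state

    # Guard GPU-only logic behind the detected distributed type.
    if state.distributed_type == DistributedType.MULTI_GPU:
        print(f"Running on {state.num_processes} GPUs")
    elif state.distributed_type == DistributedType.TPU:
        print("Running on TPU cores")
    else:  # DistributedType.NO
        print("Single-process run")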

Utilities