Information about the current distributed setup is held by the following class:

AcceleratorState(fp16: bool = None, cpu: bool = False, _from_accelerator: bool = False)
Attributes:

- device (torch.device) – The device to use.
- distributed_type (DistributedType) – The type of distributed environment currently in use.
- num_processes (int) – The number of processes currently launched in parallel.
- process_index (int) – The index of the current process.
- local_process_index (int) – The index of the current process on the current server.
- use_fp16 (bool) – Whether or not the current script will use mixed precision.