Launch your training function inside a notebook. Currently, it supports launching training with TPUs on [Google Colab](https://colab.research.google.com/) and [Kaggle kernels](https://www.kaggle.com/code), as well as training on several GPUs (if the machine running your notebook has them).
An example can be found in this notebook.
The Accelerator object should only be defined inside the training function. This is because the initialization should be done inside the launcher only.
notebook_launcher(function, args=(), num_processes=None, use_fp16=False, use_port='29500')
Launches a training function, using several processes if it’s possible in the current environment (TPU with multiple cores for instance).
Parameters:

- **function** (`Callable`) – The training function to execute. If it accepts arguments, the first argument should be the index of the process run.
- **args** (`Tuple`) – Tuple of arguments to pass to the function (it will receive `*args`).
- **num_processes** (`int`, *optional*) – The number of processes to use for training. Will default to 8 in Colab/Kaggle if a TPU is available, otherwise to the number of GPUs available.
- **use_fp16** (`bool`, *optional*, defaults to `False`) – If `True`, will use mixed precision training on multi-GPU.
- **use_port** (`str`, *optional*, defaults to `"29500"`) – The port to use to communicate between processes when launching a multi-GPU training.