Apple Silicon
Apple Silicon (M series) chips feature a unified memory architecture, which makes it possible to train large models locally and improves performance by reducing the latency of data retrieval. You can take advantage of Apple Silicon for training with PyTorch through its integration with Metal Performance Shaders (MPS).
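As a quick sanity check, you can verify that PyTorch sees the MPS device before training. A minimal sketch (the tensor shape here is arbitrary):

```python
import torch

# Check that the MPS backend is available before relying on it.
if torch.backends.mps.is_available():
    device = torch.device("mps")
    x = torch.ones(2, 2, device=device)  # tensor allocated in unified memory
    print(x.device)  # mps:0
else:
    print("MPS backend not available, falling back to CPU")
```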
The `mps` backend requires macOS 12.3 or later.
Some PyTorch operations are not implemented in MPS yet. To avoid errors, set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to fall back to CPU kernels for those operations. Please open an issue in the PyTorch repository if you encounter any other problems.
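A minimal sketch of enabling the fallback from Python. The variable must be set before PyTorch initializes the MPS backend, so it is assigned before the `torch` import here; you can equivalently prefix your launch command with `PYTORCH_ENABLE_MPS_FALLBACK=1` in the shell:

```python
import os

# PYTORCH_ENABLE_MPS_FALLBACK must be set before PyTorch initializes
# the MPS backend, so set it before importing torch.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch  # noqa: E402 (import intentionally placed after the env var)
```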
TrainingArguments and Trainer detect and set the backend device to `mps` if an Apple Silicon device is available. No additional changes are required to enable training on your device.
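A minimal training sketch to illustrate this; the checkpoint (`distilbert-base-uncased`), dataset (`imdb`), and hyperparameters are illustrative placeholders, not recommendations. Note that no device argument is passed anywhere:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Placeholder checkpoint and dataset, chosen only for illustration.
model_id = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

dataset = load_dataset("imdb", split="train[:1%]")
dataset = dataset.map(
    lambda batch: tokenizer(
        batch["text"], truncation=True, max_length=128, padding="max_length"
    ),
    batched=True,
)

# No device argument is needed; Trainer selects the mps device on its own
# when an Apple Silicon GPU is available.
training_args = TrainingArguments(
    output_dir="mps-training-output",
    per_device_train_batch_size=8,
    num_train_epochs=1,
)
print(training_args.device)  # device(type='mps') on Apple Silicon

trainer = Trainer(model=model, args=training_args, train_dataset=dataset)
trainer.train()
```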
The `mps` backend doesn't support distributed training.
Resources
Learn more about the MPS backend in the Introducing Accelerated PyTorch Training on Mac blog post.