🤗 Optimum

🤗 Optimum is an extension of 🤗 Transformers that provides a set of performance optimization tools to train and run models on targeted hardware with maximum efficiency.

The AI ecosystem evolves quickly, and new specialized hardware platforms, each with their own optimization stack, emerge every day. 🤗 Optimum therefore enables developers to target any of these platforms with the same ease inherent to 🤗 Transformers.

🤗 Optimum is distributed as a collection of packages. Check out the links below for an in-depth look at each one.

Optimum Graphcore

Train transformers on Graphcore IPUs, a completely new kind of massively parallel processor to accelerate machine intelligence.
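As a rough sketch of the training workflow, assuming the optimum-graphcore package is installed (the checkpoint and IPU configuration names are illustrative, and exact argument names may vary between releases):

```python
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.graphcore import IPUConfig, IPUTrainer, IPUTrainingArguments

model_id = "bert-base-uncased"
model = AutoModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Tiny tokenized dataset, just to make the sketch end-to-end.
dataset = load_dataset("glue", "sst2", split="train[:128]")
dataset = dataset.map(
    lambda ex: tokenizer(ex["sentence"], truncation=True, padding="max_length", max_length=128),
    batched=True,
)

# IPU-specific execution options (pipelining, replication, ...) live in an IPUConfig.
ipu_config = IPUConfig.from_pretrained("Graphcore/bert-base-ipu")  # illustrative name

training_args = IPUTrainingArguments(output_dir="outputs", num_train_epochs=1)

# IPUTrainer mirrors the 🤗 Transformers Trainer API but compiles and runs on IPUs.
trainer = IPUTrainer(
    model=model,
    ipu_config=ipu_config,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```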

Optimum Habana

Maximize training throughput and efficiency with Habana's Gaudi processor.
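A minimal sketch of the equivalent Gaudi workflow, assuming the optimum-habana package is installed (the Gaudi configuration name shown is illustrative):

```python
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from optimum.habana import GaudiTrainer, GaudiTrainingArguments

model_id = "bert-base-uncased"
model = AutoModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Tiny tokenized dataset so the sketch is end-to-end.
dataset = load_dataset("glue", "sst2", split="train[:128]")
dataset = dataset.map(
    lambda ex: tokenizer(ex["sentence"], truncation=True, padding="max_length", max_length=128),
    batched=True,
)

# use_habana runs training on Gaudi; use_lazy_mode enables HPU lazy execution.
training_args = GaudiTrainingArguments(
    output_dir="outputs",
    use_habana=True,
    use_lazy_mode=True,
    gaudi_config_name="Habana/bert-base-uncased",  # hosted Gaudi config; name assumed
)

# GaudiTrainer is a drop-in counterpart of the 🤗 Transformers Trainer.
trainer = GaudiTrainer(model=model, args=training_args, train_dataset=dataset)
trainer.train()
```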

Optimum Intel

Use Intel's Neural Compressor and OpenVINO frameworks to accelerate transformer inference.
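A minimal inference sketch with the OpenVINO integration (import paths and the `export` flag have shifted between optimum-intel releases, so treat the exact spelling as an assumption):

```python
from transformers import AutoTokenizer, pipeline
from optimum.intel import OVModelForSequenceClassification

model_id = "distilbert-base-uncased-finetuned-sst-2-english"

# export=True converts the PyTorch checkpoint to OpenVINO IR on the fly
# (older releases used from_transformers=True instead).
model = OVModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# The resulting model plugs into the standard 🤗 Transformers pipeline API.
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("Optimum makes OpenVINO inference straightforward."))
```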

ONNX Runtime

Apply quantization and graph optimization to accelerate transformer training and inference with ONNX Runtime.
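For example, here is a sketch of exporting a model to ONNX and applying dynamic quantization (the checkpoint name is illustrative, and the quantization API has evolved across optimum releases):

```python
from optimum.onnxruntime import ORTModelForSequenceClassification, ORTQuantizer
from optimum.onnxruntime.configuration import AutoQuantizationConfig

model_id = "distilbert-base-uncased-finetuned-sst-2-english"

# Export the PyTorch checkpoint to ONNX and load it with ONNX Runtime.
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)

# Dynamic (weight-only) quantization targeting AVX512-VNNI CPUs.
qconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)
quantizer = ORTQuantizer.from_pretrained(model)
quantizer.quantize(save_dir="distilbert_quantized", quantization_config=qconfig)
```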

Torch FX

Create and compose custom graph transformations to optimize PyTorch transformer models with Torch FX.
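A short sketch of composing two of the provided transformations over a traced BERT model (transformation names come from optimum.fx.optimization and may differ by version):

```python
from transformers import BertModel
from transformers.utils.fx import symbolic_trace
from optimum.fx.optimization import ChangeTrueDivToMulByInverse, MergeLinears, compose

model = BertModel.from_pretrained("bert-base-uncased")

# Trace the model into a torch.fx.GraphModule.
traced = symbolic_trace(model, input_names=["input_ids", "attention_mask", "token_type_ids"])

# Compose two graph transformations and apply them in a single pass.
transformation = compose(ChangeTrueDivToMulByInverse(), MergeLinears())
optimized = transformation(traced)
```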