Overview
🤗 Optimum provides an API called `BetterTransformer`, a fast path of standard PyTorch Transformer APIs that benefits from interesting speedups on CPU & GPU through sparsity and fused kernels such as Flash Attention. For now, `BetterTransformer` supports the fastpath from the native `nn.TransformerEncoderLayer`, as well as Flash Attention and Memory-Efficient Attention from `torch.nn.functional.scaled_dot_product_attention`.
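As an illustration, here is a minimal sketch of the PyTorch-native fastpath for `nn.TransformerEncoderLayer`; the layer sizes and input shapes are arbitrary, chosen only for the example:

```python
import torch
import torch.nn as nn

# Minimal sketch: the native fastpath is taken in inference mode,
# when gradients are not required and compatible options are set.
layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
layer.eval()

src = torch.rand(2, 16, 512)  # (batch, sequence, hidden)
with torch.inference_mode():
    out = layer(src)  # dispatches to the fused fastpath kernels when eligible
print(out.shape)  # torch.Size([2, 16, 512])
```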
Quickstart
Since version 1.13, PyTorch has shipped a stable fast path for its standard Transformer APIs that provides out-of-the-box performance improvements for transformer-based models. You can benefit from interesting speedups on most consumer devices, including CPUs and both older and newer NVIDIA GPUs. You can now use this feature in 🤗 Optimum together with Transformers for major models in the Hugging Face ecosystem.
In version 2.0, PyTorch includes a native scaled dot-product attention operator (SDPA) as part of `torch.nn.functional`. This function encompasses several implementations that are applied depending on the inputs and the hardware in use. See the official documentation for more information, and this blog post for benchmarks.
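For illustration, here is a minimal sketch of calling SDPA directly (tensor shapes are arbitrary); PyTorch selects among Flash Attention, memory-efficient attention, or a math fallback automatically:

```python
import torch
import torch.nn.functional as F

# Shapes follow the (batch, num_heads, sequence_length, head_dim) convention.
query = torch.rand(1, 8, 128, 64)
key = torch.rand(1, 8, 128, 64)
value = torch.rand(1, 8, 128, 64)

# PyTorch dispatches to the best available backend for these
# inputs and this hardware; on CPU this falls back to the math kernel.
out = F.scaled_dot_product_attention(query, key, value, is_causal=True)
print(out.shape)  # torch.Size([1, 8, 128, 64])
```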
We provide an integration with these optimizations out of the box in 🤗 Optimum, so that you can convert any supported 🤗 Transformers model to use the optimized paths and the `scaled_dot_product_attention` function when relevant.
Note that by default in training mode, the `BetterTransformer` integration drops mask support and can only be used for training that does not require a padding mask for batching. This is the case, for example, for masked language modeling or causal language modeling. `BetterTransformer` is not suited for fine-tuning models on tasks that require a padding mask.
In inference mode, the padding mask is kept for correctness, so speedups should be expected only in the batch size = 1 case.
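As a sketch of the batch size = 1 inference case (using `bert-base-cased` as an arbitrary example), a single sequence needs no padding, so the optimized path can be used end to end:

```python
from transformers import AutoModel, AutoTokenizer
from optimum.bettertransformer import BetterTransformer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = BetterTransformer.transform(AutoModel.from_pretrained("bert-base-cased"))

# One sequence means no padding mask is required,
# so the fused kernels apply throughout the forward pass.
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model(**inputs)
```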
Supported models
The list of supported models is below:
- ALBERT
- Bark
- BART
- BERT
- BERT-generation
- BLIP-2
- BLOOM
- CamemBERT
- CLIP
- CodeGen
- Data2VecText
- DistilBert
- DeiT
- Electra
- Ernie
- Falcon (No need to use BetterTransformer, it is directly supported by Transformers)
- FSMT
- GPT2
- GPT-J
- GPT-Neo
- GPT-NeoX
- GPT BigCode (SantaCoder, StarCoder - no need to use BetterTransformer, it is directly supported by Transformers)
- HuBERT
- LayoutLM
- Llama & Llama2 (No need to use BetterTransformer, it is directly supported by Transformers)
- MarkupLM
- Marian
- MBart
- M2M100
- OPT
- ProphetNet
- RemBERT
- RoBERTa
- RoCBert
- RoFormer
- Splinter
- Tapas
- ViLT
- ViT
- ViT-MAE
- ViT-MSN
- Wav2Vec2
- Whisper (No need to use BetterTransformer, it is directly supported by Transformers)
- XLM-RoBERTa
- YOLOS
Let us know by opening an issue in 🤗 Optimum if you want more models to be supported, or check out the contribution guidelines if you want to add support yourself!
Quick usage
In order to use the `BetterTransformer` API, just run the following commands:
```python
>>> from transformers import AutoModelForSequenceClassification
>>> from optimum.bettertransformer import BetterTransformer
>>> model_hf = AutoModelForSequenceClassification.from_pretrained("bert-base-cased")
>>> model = BetterTransformer.transform(model_hf, keep_original_model=True)
```
You can leave `keep_original_model=False` (the default) if you want to overwrite the current model with its `BetterTransformer` version.
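Recent versions of 🤗 Optimum also expose `BetterTransformer.reverse` to convert a transformed model back to its canonical Transformers implementation, for example before saving it; a minimal sketch (the save path is illustrative):

```python
from optimum.bettertransformer import BetterTransformer

# Revert to the canonical Transformers modeling code before saving,
# so the checkpoint can be loaded later without Optimum.
model = BetterTransformer.reverse(model)
model.save_pretrained("fine_tuned_model")  # hypothetical output directory
```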
See the tutorials section for more details on how to use it, or check out the Google Colab demo!