Optimization

🤗 Optimum provides an optimum.onnxruntime package that enables you to apply graph optimizations to many models hosted on the 🤗 hub using the ONNX Runtime model optimization tool.

Creating an ORTOptimizer

The ORTOptimizer class is used to optimize your ONNX model. The class can be initialized using the from_pretrained() method, which supports different checkpoint formats.

  1. Using an already initialized ORTModelForXXX class.
>>> from optimum.onnxruntime import ORTOptimizer, ORTModelForSequenceClassification

# Loading ONNX Model from the Hub
>>> model = ORTModelForSequenceClassification.from_pretrained("optimum/distilbert-base-uncased-finetuned-sst-2-english")

# Create an optimizer from an ORTModelForXXX
>>> optimizer = ORTOptimizer.from_pretrained(model)
  2. Using a local ONNX model from a directory.
>>> from optimum.onnxruntime import ORTOptimizer

# This assumes a model.onnx exists in path/to/model
>>> optimizer = ORTOptimizer.from_pretrained("path/to/model")

Optimization Configuration

The OptimizationConfig class allows you to specify how the optimization should be performed by the ORTOptimizer.

In the optimization configuration, there are 4 possible optimization levels:

  - optimization_level=0: disables all optimizations
  - optimization_level=1: enables basic optimizations, such as constant folding and redundant node elimination
  - optimization_level=2: enables extended graph optimizations, such as node fusions
  - optimization_level=99: enables data layout optimizations

Choosing a level enables the optimizations of that level, as well as the optimizations of all preceding levels. More information can be found in the ONNX Runtime graph optimizations documentation.
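For example, to enable every available optimization (a minimal sketch; optimization_level=99 corresponds to ONNX Runtime's ORT_ENABLE_ALL level):

>>> from optimum.onnxruntime import OptimizationConfig

# Enable all available optimizations, including data layout optimizations
>>> optimization_config = OptimizationConfig(optimization_level=99)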

enable_transformers_specific_optimizations=True means that transformers-specific graph fusions and approximations are performed in addition to the ONNX Runtime optimizations described above. Here is a list of the possible optimizations you can enable:

  - Gelu fusion with disable_gelu_fusion=False
  - Layer Normalization fusion with disable_layer_norm_fusion=False
  - Attention fusion with disable_attention_fusion=False
  - SkipLayerNormalization fusion with disable_skip_layer_norm_fusion=False
  - Add Bias and SkipLayerNormalization fusion with disable_bias_skip_layer_norm_fusion=False
  - Add Bias and Gelu / FastGelu fusion with disable_bias_gelu_fusion=False
  - Gelu approximation with enable_gelu_approximation=True
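For instance, a configuration that keeps the transformers-specific fusions but turns off attention fusion could look like this (a minimal sketch combining the flags listed above):

>>> from optimum.onnxruntime import OptimizationConfig

# Basic optimizations plus transformers-specific fusions, minus attention fusion
>>> optimization_config = OptimizationConfig(
    optimization_level=1,
    enable_transformers_specific_optimizations=True,
    disable_attention_fusion=True,
)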

While OptimizationConfig gives you full control over how the optimization is performed, it can be hard to know what to enable / disable. Instead, you can use AutoOptimizationConfig, which provides four common optimization levels:

  - O1: basic general optimizations
  - O2: basic and extended general optimizations, plus transformers-specific fusions
  - O3: same as O2, with GELU approximation
  - O4: same as O3, with mixed precision (fp16, GPU only)

Example: Loading an O2 OptimizationConfig

>>> from optimum.onnxruntime import AutoOptimizationConfig
>>> optimization_config = AutoOptimizationConfig.O2()

You can also specify custom arguments that were not defined in the O2 configuration, for instance:

>>> from optimum.onnxruntime import AutoOptimizationConfig
>>> optimization_config = AutoOptimizationConfig.O2(disable_embed_layer_norm_fusion=False)
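Here, passing disable_embed_layer_norm_fusion=False overrides the O2 preset to enable the embedding layer normalization fusion, which the preset otherwise leaves disabled.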

Optimization examples

Below you will find an easy end-to-end example of how to optimize distilbert-base-uncased-finetuned-sst-2-english.

>>> from optimum.onnxruntime import ORTOptimizer, ORTModelForSequenceClassification, OptimizationConfig

>>> model_id = "distilbert-base-uncased-finetuned-sst-2-english"
>>> save_dir = "/tmp/outputs"

# Load a PyTorch model and export it to the ONNX format
>>> model = ORTModelForSequenceClassification.from_pretrained(model_id, from_transformers=True)

# Create the optimizer
>>> optimizer = ORTOptimizer.from_pretrained(model)

# Define the optimization strategy by creating the appropriate configuration
>>> optimization_config = OptimizationConfig(
    optimization_level=2,
    enable_transformers_specific_optimizations=True,
    optimize_for_gpu=False,
)

# Optimize the model
>>> optimizer.optimize(save_dir=save_dir, optimization_config=optimization_config)
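You can then load the optimized model back for inference. A minimal sketch follows; the file name model_optimized.onnx assumes the optimizer's default naming scheme, which appends an _optimized suffix:

>>> from transformers import AutoTokenizer, pipeline

# Load the resulting optimized model
>>> optimized_model = ORTModelForSequenceClassification.from_pretrained(save_dir, file_name="model_optimized.onnx")
>>> tokenizer = AutoTokenizer.from_pretrained(model_id)

# Run inference with a regular transformers pipeline
>>> cls_pipeline = pipeline("text-classification", model=optimized_model, tokenizer=tokenizer)
>>> cls_pipeline("I love the new graph optimizations!")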

Below you will find an easy end-to-end example of how to optimize a Seq2Seq model, sshleifer/distilbart-cnn-12-6.

>>> from optimum.onnxruntime import ORTOptimizer, ORTModelForSeq2SeqLM, OptimizationConfig
>>> from transformers import AutoTokenizer

>>> model_id = "sshleifer/distilbart-cnn-12-6"
>>> save_dir = "/tmp/outputs"

# Load a PyTorch model and export it to the ONNX format
>>> model = ORTModelForSeq2SeqLM.from_pretrained(model_id, from_transformers=True)

# Create the optimizer
>>> optimizer = ORTOptimizer.from_pretrained(model)

# Define the optimization strategy by creating the appropriate configuration
>>> optimization_config = OptimizationConfig(
    optimization_level=2,
    enable_transformers_specific_optimizations=True,
    optimize_for_gpu=False,
)

# Optimize the model
>>> optimizer.optimize(save_dir=save_dir, optimization_config=optimization_config)

# Load the resulting optimized model
>>> optimized_model = ORTModelForSeq2SeqLM.from_pretrained(
    save_dir,
    encoder_file_name="encoder_model_optimized.onnx",
    decoder_file_name="decoder_model_optimized.onnx",
    decoder_file_with_past_name="decoder_with_past_model_optimized.onnx",
)
>>> tokenizer = AutoTokenizer.from_pretrained(model_id)
>>> tokens = tokenizer("This is a sample input", return_tensors="pt")
>>> outputs = optimized_model.generate(**tokens)
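The generated token IDs can then be decoded back into text:

# Decode the generated token IDs into a summary string
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True))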