Quickstart
At its core, 🤗 Optimum uses configuration objects to define parameters for optimization on different accelerators. These objects are then used to instantiate dedicated optimizers, quantizers, and pruners.
Before applying quantization or optimization, we first need to export our model to the ONNX format.
>>> from optimum.onnxruntime import ORTModelForSequenceClassification
>>> from transformers import AutoTokenizer
>>> model_checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
>>> save_directory = "tmp/onnx/"
>>> # Load a model from transformers and export it to ONNX
>>> ort_model = ORTModelForSequenceClassification.from_pretrained(model_checkpoint, export=True)
>>> tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
>>> # Save the onnx model and tokenizer
>>> ort_model.save_pretrained(save_directory)
>>> tokenizer.save_pretrained(save_directory)
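At this point the exported model can already be used for inference. A quick sanity check with a transformers pipeline (a minimal sketch reusing the objects created above):

>>> from transformers import pipeline
>>> # Sanity check: run the exported (not yet quantized) ONNX model through a pipeline
>>> onnx_pipeline = pipeline("text-classification", model=ort_model, tokenizer=tokenizer)
>>> onnx_pipeline("ONNX Runtime is now serving this model!")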
Let’s now see how to apply dynamic quantization with ONNX Runtime:
>>> from optimum.onnxruntime.configuration import AutoQuantizationConfig
>>> from optimum.onnxruntime import ORTQuantizer
>>> # Define the quantization methodology
>>> qconfig = AutoQuantizationConfig.arm64(is_static=False, per_channel=False)
>>> quantizer = ORTQuantizer.from_pretrained(ort_model)
>>> # Apply dynamic quantization on the model
>>> quantizer.quantize(save_dir=save_directory, quantization_config=qconfig)
In this example, we’ve quantized a model from the Hugging Face Hub, but it could also be a path to a local model directory. The result of applying the quantize() method is a model_quantized.onnx file that can be used to run inference. Here’s an example of how to load an ONNX Runtime model and generate predictions with it:
>>> from optimum.onnxruntime import ORTModelForSequenceClassification
>>> from transformers import pipeline, AutoTokenizer
>>> model = ORTModelForSequenceClassification.from_pretrained(save_directory, file_name="model_quantized.onnx")
>>> tokenizer = AutoTokenizer.from_pretrained(save_directory)
>>> cls_pipeline = pipeline("text-classification", model=model, tokenizer=tokenizer)
>>> results = cls_pipeline("I love burritos!")
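The output has the same structure as a regular transformers text-classification pipeline, i.e. a list of dictionaries containing a label and a score:

>>> # Inspect the prediction returned by the quantized model
>>> print(results[0]["label"], results[0]["score"])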
Similarly, you can apply static quantization by simply setting is_static to True when instantiating the QuantizationConfig object:
>>> qconfig = AutoQuantizationConfig.arm64(is_static=True, per_channel=False)
Static quantization relies on feeding batches of data through the model to estimate the activation quantization parameters ahead of inference time. To support this, 🤗 Optimum allows you to provide a calibration dataset. The calibration dataset can be a simple Dataset object from the 🤗 Datasets library, or any dataset that’s hosted on the Hugging Face Hub. For this example, we’ll pick the sst2 dataset that the model was originally trained on:
>>> from functools import partial
>>> from optimum.onnxruntime.configuration import AutoCalibrationConfig
>>> # Define the processing function to apply to each example after loading the dataset
>>> def preprocess_fn(ex, tokenizer):
... return tokenizer(ex["sentence"])
>>> # Create the calibration dataset
>>> calibration_dataset = quantizer.get_calibration_dataset(
... "glue",
... dataset_config_name="sst2",
... preprocess_function=partial(preprocess_fn, tokenizer=tokenizer),
... num_samples=50,
... dataset_split="train",
... )
>>> # Create the calibration configuration containing the parameters related to calibration.
>>> calibration_config = AutoCalibrationConfig.minmax(calibration_dataset)
>>> # Perform the calibration step: computes the activations quantization ranges
>>> ranges = quantizer.fit(
... dataset=calibration_dataset,
... calibration_config=calibration_config,
... operators_to_quantize=qconfig.operators_to_quantize,
... )
>>> # Apply static quantization on the model
>>> quantizer.quantize(
... save_dir=save_directory,
... calibration_tensors_range=ranges,
... quantization_config=qconfig,
... )
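Just like in the dynamic case, the statically quantized model is written to save_directory and can be loaded for inference. A minimal sketch, assuming the quantized graph was saved under the default model_quantized.onnx file name:

>>> # Load the statically quantized model (file name assumed) and run it through a pipeline
>>> model = ORTModelForSequenceClassification.from_pretrained(save_directory, file_name="model_quantized.onnx")
>>> cls_pipeline = pipeline("text-classification", model=model, tokenizer=tokenizer)
>>> cls_pipeline("Static quantization worked like a charm!")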
As a final example, let’s take a look at applying graph optimization techniques such as operator fusion and constant folding. As before, we load a configuration object, but this time by setting the optimization level instead of the quantization approach:
>>> from optimum.onnxruntime.configuration import OptimizationConfig
>>> # Here the optimization level is set to 1, enabling basic optimizations such as redundant node elimination and constant folding. Higher optimization levels will result in a hardware-dependent optimized graph.
>>> optimization_config = OptimizationConfig(optimization_level=1)
Next, we load an optimizer to apply these optimizations to our model:
>>> from optimum.onnxruntime import ORTOptimizer
>>> optimizer = ORTOptimizer.from_pretrained(ort_model)
>>> # Optimize the model
>>> optimizer.optimize(save_dir=save_directory, optimization_config=optimization_config)
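The optimized model can then be loaded in exactly the same way as the quantized one. A minimal sketch, assuming the optimizer saved the graph as model_optimized.onnx in save_directory:

>>> # Load the optimized model (file name assumed) and run inference
>>> optimized_model = ORTModelForSequenceClassification.from_pretrained(save_directory, file_name="model_optimized.onnx")
>>> opt_pipeline = pipeline("text-classification", model=optimized_model, tokenizer=tokenizer)
>>> opt_pipeline("Operator fusion and constant folding in action!")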
And that’s it - the model is now optimized and ready for inference! As you can see, the process is similar in each case:
- Define the optimization / quantization strategy via an OptimizationConfig / QuantizationConfig object
- Instantiate an ORTQuantizer or ORTOptimizer class
- Apply the quantize() or optimize() method
- Run inference
Check out the examples directory for more sophisticated usage.
Happy optimizing 🤗!