🤗 Optimum provides an optimum.onnxruntime package that enables you to apply quantization on many models hosted on the 🤗 Hub using the ONNX Runtime quantization tool.
ORTQuantizer
The ORTQuantizer class is used to quantize your ONNX model. The class can be initialized using the from_pretrained() method, which supports different checkpoint formats.
Using an already initialized ORTModelForXXX class:
>>> from optimum.onnxruntime import ORTQuantizer, ORTModelForSequenceClassification
# Load an ONNX model from the Hub
>>> ort_model = ORTModelForSequenceClassification.from_pretrained("optimum/distilbert-base-uncased-finetuned-sst-2-english")
# Create a quantizer from an ORTModelForXXX instance
>>> quantizer = ORTQuantizer.from_pretrained(ort_model)
# Configuration
>>> ...
# Quantize the model
>>> quantizer.quantize(...)
Using a local ONNX model from a directory:
>>> from optimum.onnxruntime import ORTQuantizer
# This assumes a model.onnx exists in path/to/model
>>> quantizer = ORTQuantizer.from_pretrained("path/to/model")
# Configuration
>>> ...
# Quantize the model
>>> quantizer.quantize(...)
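If the ONNX file in the directory is not named model.onnx, you can point the quantizer at it explicitly with the file_name argument (also used in the Seq2Seq example below). A minimal sketch, where my_model.onnx is a placeholder name:
>>> from optimum.onnxruntime import ORTQuantizer
# Quantize a specific ONNX file inside the directory (my_model.onnx is a placeholder)
>>> quantizer = ORTQuantizer.from_pretrained("path/to/model", file_name="my_model.onnx")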
The ORTQuantizer class can be used to dynamically quantize your ONNX model. Below you will find an end-to-end example of how to dynamically quantize distilbert-base-uncased-finetuned-sst-2-english.
>>> from optimum.onnxruntime import ORTQuantizer, ORTModelForSequenceClassification
>>> from optimum.onnxruntime.configuration import AutoQuantizationConfig
>>> model_id = "distilbert-base-uncased-finetuned-sst-2-english"
# Load PyTorch model and convert to ONNX
>>> onnx_model = ORTModelForSequenceClassification.from_pretrained(model_id, from_transformers=True)
# Create quantizer
>>> quantizer = ORTQuantizer.from_pretrained(onnx_model)
# Define the quantization strategy by creating the appropriate configuration
>>> dqconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)
# Quantize the model
>>> model_quantized_path = quantizer.quantize(
...     save_dir="path/to/output/model",
...     quantization_config=dqconfig,
... )
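To check the result, the quantized model can be loaded back and used like any other ONNX model, for instance in a transformers pipeline. A minimal sketch, assuming quantize() saved the model as model_quantized.onnx in the output directory (the file name is an assumption; the path returned by quantize() is authoritative):
>>> from transformers import AutoTokenizer, pipeline
>>> from optimum.onnxruntime import ORTModelForSequenceClassification
# Load the quantized model (file name assumed; adjust to the actual output)
>>> quantized_model = ORTModelForSequenceClassification.from_pretrained(
...     "path/to/output/model", file_name="model_quantized.onnx"
... )
>>> tokenizer = AutoTokenizer.from_pretrained(model_id)
>>> classifier = pipeline("text-classification", model=quantized_model, tokenizer=tokenizer)
>>> classifier("I love the new quantized model!")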
The ORTQuantizer class can be used to statically quantize your ONNX model. Below you will find an end-to-end example of how to statically quantize distilbert-base-uncased-finetuned-sst-2-english.
>>> from functools import partial
>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTQuantizer, ORTModelForSequenceClassification
>>> from optimum.onnxruntime.configuration import AutoQuantizationConfig, AutoCalibrationConfig
>>> model_id = "distilbert-base-uncased-finetuned-sst-2-english"
# Load the PyTorch model, convert it to ONNX, then create the quantizer and set up the configuration
>>> onnx_model = ORTModelForSequenceClassification.from_pretrained(model_id, from_transformers=True)
>>> tokenizer = AutoTokenizer.from_pretrained(model_id)
>>> quantizer = ORTQuantizer.from_pretrained(onnx_model)
>>> qconfig = AutoQuantizationConfig.arm64(is_static=True, per_channel=False)
# Create the calibration dataset
>>> def preprocess_fn(ex, tokenizer):
...     return tokenizer(ex["sentence"])
>>> calibration_dataset = quantizer.get_calibration_dataset(
...     "glue",
...     dataset_config_name="sst2",
...     preprocess_function=partial(preprocess_fn, tokenizer=tokenizer),
...     num_samples=50,
...     dataset_split="train",
... )
# Create the calibration configuration containing the parameters related to calibration.
>>> calibration_config = AutoCalibrationConfig.minmax(calibration_dataset)
# Perform the calibration step: compute the activation quantization ranges
>>> ranges = quantizer.fit(
...     dataset=calibration_dataset,
...     calibration_config=calibration_config,
...     operators_to_quantize=qconfig.operators_to_quantize,
... )
# Apply static quantization on the model
>>> model_quantized_path = quantizer.quantize(
...     save_dir="path/to/output/model",
...     calibration_tensors_range=ranges,
...     quantization_config=qconfig,
... )
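MinMax is only one of the available calibration strategies. If outlier activations make the MinMax ranges too wide, an entropy- or percentile-based calibration can yield tighter ranges and better accuracy. A sketch of the alternatives, assuming AutoCalibrationConfig exposes entropy() and percentiles() constructors analogous to minmax() (verify against your installed version):
>>> from optimum.onnxruntime.configuration import AutoCalibrationConfig
# Entropy-based calibration (constructor name assumed, mirroring minmax() above)
>>> calibration_config = AutoCalibrationConfig.entropy(calibration_dataset)
# Percentile-based calibration that clips extreme outliers (parameter name assumed)
>>> calibration_config = AutoCalibrationConfig.percentiles(calibration_dataset, percentile=99.999)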
The ORTQuantizer currently doesn't support multi-file models, like ORTModelForSeq2SeqLM. If you want to quantize a Seq2Seq model, you have to quantize each of the model's components individually using the ORTQuantizer class. Currently, only dynamic quantization is supported for Seq2Seq models.
>>> from optimum.onnxruntime import ORTQuantizer, ORTModelForSeq2SeqLM
>>> from optimum.onnxruntime.configuration import AutoQuantizationConfig
# Load the Seq2Seq model and get the directory containing its ONNX files
>>> model_id = "optimum/t5-small"
>>> onnx_model = ORTModelForSeq2SeqLM.from_pretrained(model_id)
>>> model_dir = onnx_model.model_save_dir
# Create encoder quantizer
>>> encoder_quantizer = ORTQuantizer.from_pretrained(model_dir, file_name="encoder_model.onnx")
# Create decoder quantizer
>>> decoder_quantizer = ORTQuantizer.from_pretrained(model_dir, file_name="decoder_model.onnx")
# Create decoder with past key values quantizer
>>> decoder_wp_quantizer = ORTQuantizer.from_pretrained(model_dir, file_name="decoder_with_past_model.onnx")
# Create the list of quantizers
>>> quantizers = [encoder_quantizer, decoder_quantizer, decoder_wp_quantizer]
# Define the quantization strategy by creating the appropriate configuration
>>> dqconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)
# Quantize the model
>>> [q.quantize(save_dir=".", quantization_config=dqconfig) for q in quantizers]
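Once each component has been quantized, the quantized files can be loaded back into an ORTModelForSeq2SeqLM. A minimal sketch, assuming quantize() wrote files with a _quantized suffix next to the originals and that from_pretrained accepts per-component file name arguments (both assumptions; check the files actually produced in the save directory):
>>> quantized_model = ORTModelForSeq2SeqLM.from_pretrained(
...     ".",
...     encoder_file_name="encoder_model_quantized.onnx",  # assumed output name
...     decoder_file_name="decoder_model_quantized.onnx",  # assumed output name
...     decoder_with_past_file_name="decoder_with_past_model_quantized.onnx",  # assumed output name
... )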