Configuration classes specify how a task should be performed. Two tasks are supported with the ONNX Runtime package:
Optimization: performed by the ORTOptimizer, this task can be tweaked using an OptimizationConfig.
Quantization: performed by the ORTQuantizer, quantization can be configured using a QuantizationConfig. A calibration step is required in some cases (post-training static quantization), which can be specified using a CalibrationConfig.
( optimization_level: int = 1 optimize_for_gpu: bool = False fp16: bool = False optimize_with_onnxruntime_only: typing.Optional[bool] = None enable_transformers_specific_optimizations: bool = True disable_gelu: typing.Optional[bool] = None disable_gelu_fusion: bool = False disable_layer_norm: typing.Optional[bool] = None disable_layer_norm_fusion: bool = False disable_attention: typing.Optional[bool] = None disable_attention_fusion: bool = False disable_skip_layer_norm: typing.Optional[bool] = None disable_skip_layer_norm_fusion: bool = False disable_bias_skip_layer_norm: typing.Optional[bool] = None disable_bias_skip_layer_norm_fusion: bool = False disable_bias_gelu: typing.Optional[bool] = None disable_bias_gelu_fusion: bool = False disable_embed_layer_norm: bool = True disable_embed_layer_norm_fusion: bool = True enable_gelu_approximation: bool = False use_mask_index: bool = False no_attention_mask: bool = False disable_shape_inference: bool = False )
Parameters

optimization_level (int, defaults to 1) —
Optimization level performed by ONNX Runtime on the loaded graph.
Supported optimization levels are 0, 1, 2 and 99.

optimize_for_gpu (bool, defaults to False) —
Whether to optimize the model for GPU inference.
The optimized graph might contain operators specific to GPU or CPU when optimization_level > 1.

fp16 (bool, defaults to False) —
Whether all weights and nodes should be converted from float32 to float16.

enable_transformers_specific_optimizations (bool, defaults to True) —
Whether to apply transformers-specific optimizations on top of ONNX Runtime's general optimizations.

disable_gelu_fusion (bool, defaults to False) —
Whether to disable the Gelu fusion.

disable_layer_norm_fusion (bool, defaults to False) —
Whether to disable Layer Normalization fusion.

disable_attention_fusion (bool, defaults to False) —
Whether to disable Attention fusion.

disable_skip_layer_norm_fusion (bool, defaults to False) —
Whether to disable SkipLayerNormalization fusion.

disable_bias_skip_layer_norm_fusion (bool, defaults to False) —
Whether to disable Add Bias and SkipLayerNormalization fusion.

disable_bias_gelu_fusion (bool, defaults to False) —
Whether to disable Add Bias and Gelu / FastGelu fusion.

disable_embed_layer_norm_fusion (bool, defaults to True) —
Whether to disable EmbedLayerNormalization fusion.
The default value is True since this fusion is incompatible with ONNX Runtime quantization.

enable_gelu_approximation (bool, defaults to False) —
Whether to enable the Gelu / BiasGelu to FastGelu conversion.
The default value is False since this approximation might slightly impact the model's accuracy.

use_mask_index (bool, defaults to False) —
Whether to use a mask index instead of the raw attention mask in the attention operator.

no_attention_mask (bool, defaults to False) —
Whether to omit attention masks. Only works for the bert model type.

disable_embed_layer_norm (bool, defaults to True) —
Whether to disable EmbedLayerNormalization fusion.
The default value is True since this fusion is incompatible with ONNX Runtime quantization.

disable_shape_inference (bool, defaults to False) —
Whether to disable symbolic shape inference.
The default value is False, but symbolic shape inference might sometimes cause issues.
OptimizationConfig is the configuration class handling all the ONNX Runtime optimization parameters. There are two stacks of optimizations: the general-purpose graph optimizations provided by ONNX Runtime, and the transformers-specific fusion optimizations applied on top of them.
Factory to create common OptimizationConfig instances.
O1

( for_gpu: bool = False **kwargs ) → OptimizationConfig

Parameters

for_gpu (bool, optional, defaults to False) —
Whether the model to optimize will run on GPU; some optimizations depend on the hardware the model will run on. Only needed for optimization_level > 1.

kwargs (Dict[str, Any], optional) —
Arguments to provide to the OptimizationConfig constructor.

Returns

OptimizationConfig — The OptimizationConfig corresponding to the O1 optimization level.

Creates an O1 OptimizationConfig.
O2

( for_gpu: bool = False **kwargs ) → OptimizationConfig

Parameters

for_gpu (bool, optional, defaults to False) —
Whether the model to optimize will run on GPU; some optimizations depend on the hardware the model will run on. Only needed for optimization_level > 1.

kwargs (Dict[str, Any], optional) —
Arguments to provide to the OptimizationConfig constructor.

Returns

OptimizationConfig — The OptimizationConfig corresponding to the O2 optimization level.

Creates an O2 OptimizationConfig.
O3

( for_gpu: bool = False **kwargs ) → OptimizationConfig

Parameters

for_gpu (bool, optional, defaults to False) —
Whether the model to optimize will run on GPU; some optimizations depend on the hardware the model will run on. Only needed for optimization_level > 1.

kwargs (Dict[str, Any], optional) —
Arguments to provide to the OptimizationConfig constructor.

Returns

OptimizationConfig — The OptimizationConfig corresponding to the O3 optimization level.

Creates an O3 OptimizationConfig.
O4

( for_gpu: bool = False **kwargs ) → OptimizationConfig

Parameters

for_gpu (bool, optional, defaults to False) —
Whether the model to optimize will run on GPU; some optimizations depend on the hardware the model will run on. Only needed for optimization_level > 1.

kwargs (Dict[str, Any], optional) —
Arguments to provide to the OptimizationConfig constructor.

Returns

OptimizationConfig — The OptimizationConfig corresponding to the O4 optimization level.

Creates an O4 OptimizationConfig.
with_optimization_level

( optimization_level: str for_gpu: bool = False **kwargs ) → OptimizationConfig

Parameters

optimization_level (str) —
The optimization level; the following values are allowed: "O1", "O2", "O3", "O4".

for_gpu (bool, optional, defaults to False) —
Whether the model to optimize will run on GPU; some optimizations depend on the hardware the model will run on. Only needed for optimization_level > 1.

kwargs (Dict[str, Any], optional) —
Arguments to provide to the OptimizationConfig constructor.

Returns

OptimizationConfig — The OptimizationConfig corresponding to the requested optimization level.

Creates an OptimizationConfig with pre-defined arguments according to an optimization level.
( is_static: bool format: QuantFormat mode: QuantizationMode = <QuantizationMode.QLinearOps: 1> activations_dtype: QuantType = <QuantType.QUInt8: 1> activations_symmetric: bool = False weights_dtype: QuantType = <QuantType.QInt8: 0> weights_symmetric: bool = True per_channel: bool = False reduce_range: bool = False nodes_to_quantize: typing.List[str] = <factory> nodes_to_exclude: typing.List[str] = <factory> operators_to_quantize: typing.List[str] = <factory> qdq_add_pair_to_weight: bool = False qdq_dedicated_pair: bool = False qdq_op_type_per_channel_support_to_axis: typing.Dict[str, int] = <factory> )
Parameters

is_static (bool) —
Whether to apply static quantization or dynamic quantization.

format (QuantFormat) —
Targeted ONNX Runtime quantization representation format.
For the Operator-Oriented (QOperator) format, all the quantized operators have their own ONNX definitions.
For the Tensor-Oriented (QDQ) format, the model is quantized by inserting QuantizeLinear / DeQuantizeLinear operators.

mode (QuantizationMode, defaults to QuantizationMode.QLinearOps) —
Targeted ONNX Runtime quantization mode; the default is QLinearOps to match the QDQ format.
When targeting dynamic quantization mode, the default value is QuantizationMode.IntegerOps, whereas the default value for static quantization mode is QuantizationMode.QLinearOps.

activations_dtype (QuantType, defaults to QuantType.QUInt8) —
The quantization data type to use for the activations.

activations_symmetric (bool, defaults to False) —
Whether to apply symmetric quantization on the activations.

weights_dtype (QuantType, defaults to QuantType.QInt8) —
The quantization data type to use for the weights.

weights_symmetric (bool, defaults to True) —
Whether to apply symmetric quantization on the weights.

per_channel (bool, defaults to False) —
Whether to quantize per channel (also known as "per-row"). Enabling this can increase overall accuracy while making the quantized model heavier.

reduce_range (bool, defaults to False) —
Whether to use reduced-range 7-bit integers instead of 8-bit integers.

nodes_to_quantize (list) —
List of the node names to quantize.

nodes_to_exclude (list) —
List of the node names to exclude when applying quantization.

operators_to_quantize (list, defaults to ["MatMul", "Add"]) —
List of the operator types to quantize.

qdq_add_pair_to_weight (bool, defaults to False) —
By default, floating-point weights are quantized and fed to a solely inserted DeQuantizeLinear node.
If set to True, the floating-point weights will remain, and both QuantizeLinear / DeQuantizeLinear nodes will be inserted.

qdq_dedicated_pair (bool, defaults to False) —
When inserting a QDQ pair, multiple nodes can share a single QDQ pair as their input. If set to True, an identical and dedicated QDQ pair is created for each node.

qdq_op_type_per_channel_support_to_axis (Dict[str, int]) —
Sets the channel axis for specific operator types. Effective only when per-channel quantization is supported and per_channel is set to True.
QuantizationConfig is the configuration class handling all the ONNX Runtime quantization parameters.
( dataset_name: str dataset_config_name: str dataset_split: str dataset_num_samples: int method: CalibrationMethod num_bins: typing.Optional[int] = None num_quantized_bins: typing.Optional[int] = None percentile: typing.Optional[float] = None moving_average: typing.Optional[bool] = None averaging_constant: typing.Optional[float] = None )
Parameters

dataset_name (str) —
The name of the calibration dataset.

dataset_config_name (str) —
The name of the calibration dataset configuration.

dataset_split (str) —
Which split of the dataset is used to perform the calibration step.

dataset_num_samples (int) —
The number of samples composing the calibration dataset.

method (CalibrationMethod) —
The method chosen to calculate the activation quantization parameters using the calibration dataset.

num_bins (int, optional) —
The number of bins to use when creating the histogram when performing the calibration step using the Percentile or Entropy method.

num_quantized_bins (int, optional) —
The number of quantized bins to use when performing the calibration step using the Entropy method.

percentile (float, optional) —
The percentile to use when computing the activation quantization ranges when performing the calibration step using the Percentile method.

moving_average (bool, optional) —
Whether to compute the moving average of the minimum and maximum values when performing the calibration step using the MinMax method.

averaging_constant (float, optional) —
The constant smoothing factor to use when computing the moving average of the minimum and maximum values. Effective only when the MinMax calibration method is selected and moving_average is set to True.
CalibrationConfig is the configuration class handling all the ONNX Runtime parameters related to the calibration step of static quantization.
( opset: typing.Optional[int] = None use_external_data_format: bool = False one_external_file: bool = True optimization: typing.Optional[optimum.onnxruntime.configuration.OptimizationConfig] = None quantization: typing.Optional[optimum.onnxruntime.configuration.QuantizationConfig] = None **kwargs )
Parameters

opset (int, optional) —
ONNX opset version to export the model with.

use_external_data_format (bool, optional, defaults to False) —
Allow exporting models larger than 2 GB.

one_external_file (bool, defaults to True) —
When use_external_data_format=True, whether to save all tensors to one external file.
If False, each tensor is saved to a file named after the tensor name.
(Cannot be set to False for quantization.)

optimization (OptimizationConfig, optional, defaults to None) —
Specify a configuration to optimize the ONNX Runtime model.

quantization (QuantizationConfig, optional, defaults to None) —
Specify a configuration to quantize the ONNX Runtime model.
ORTConfig is the configuration class handling all the ONNX Runtime parameters related to the ONNX IR model export, optimization and quantization parameters.