Handles the ONNX Runtime quantization process for models shared on huggingface.co/models.
fit
( dataset: Dataset calibration_config: CalibrationConfig onnx_augmented_model_name: str = 'augmented_model.onnx' operators_to_quantize: typing.Optional[typing.List[str]] = None batch_size: int = 1 use_external_data_format: bool = False use_gpu: bool = False force_symmetric_range: bool = False )
Parameters
dataset (Dataset) — The dataset to use when performing the calibration step.
calibration_config (CalibrationConfig) — The configuration containing the parameters related to the calibration step.
onnx_augmented_model_name (Union[str, os.PathLike]) — The path used to save the augmented model used to collect the quantization ranges.
operators_to_quantize (list, optional) — The list of operator types to quantize.
batch_size (int, defaults to 1) — The batch size to use when collecting the quantization range values.
use_external_data_format (bool, defaults to False) — Whether to use the external data format to store a model whose size is >= 2GB.
use_gpu (bool, defaults to False) — Whether to use the GPU when collecting the quantization range values.
force_symmetric_range (bool, defaults to False) — Whether to make the quantization ranges symmetric.
Perform the calibration step and collect the quantization ranges.
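For illustration, here is a minimal sketch of the calibration step. It assumes a quantizer created with from_pretrained and a calibration_dataset built with get_calibration_dataset (both shown below), and uses AutoQuantizationConfig and AutoCalibrationConfig from optimum.onnxruntime.configuration; the avx512_vnni config is only an example choice:

from optimum.onnxruntime.configuration import AutoCalibrationConfig, AutoQuantizationConfig

# Assumed context: `quantizer` is an ORTQuantizer and `calibration_dataset` a
# datasets.Dataset (see from_pretrained and get_calibration_dataset below).
qconfig = AutoQuantizationConfig.avx512_vnni(is_static=True, per_channel=False)
calibration_config = AutoCalibrationConfig.minmax(calibration_dataset)

# Run the calibration pass; the returned dict maps node names to (min, max) ranges.
calibration_ranges = quantizer.fit(
    dataset=calibration_dataset,
    calibration_config=calibration_config,
    operators_to_quantize=qconfig.operators_to_quantize,
)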
from_pretrained
( model_or_path: typing.Union[str, pathlib.Path] file_name: typing.Optional[str] = None )
Parameters
model_or_path (Union[str, Path]) — Can be either:
    - A path to a saved exported ONNX Intermediate Representation (IR) model.
    - Or an ORTModelForXX class, e.g., ORTModelForQuestionAnswering.
file_name (Optional[str], defaults to None) — Overwrites the default model file name from "model.onnx" to file_name. This allows you to load different model files from the same repository or directory.
Instantiates an ORTQuantizer from an ONNX model file or an ORTModel.
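A minimal sketch of both ways to instantiate the quantizer; the checkpoint name and the local directory are placeholders, and export=True assumes a recent Optimum version (older releases used from_transformers=True):

from optimum.onnxruntime import ORTModelForSequenceClassification, ORTQuantizer

# From an ORTModelForXX instance. export=True converts the PyTorch checkpoint
# to ONNX on the fly (the checkpoint name is only an example).
model = ORTModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased-finetuned-sst-2-english", export=True
)
quantizer = ORTQuantizer.from_pretrained(model)

# Or from a directory that already contains an exported model
# ("my_onnx_model_dir" is a placeholder path).
quantizer = ORTQuantizer.from_pretrained("my_onnx_model_dir", file_name="model.onnx")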
get_calibration_dataset
( dataset_name: str num_samples: int = 100 dataset_config_name: typing.Optional[str] = None dataset_split: typing.Optional[str] = None preprocess_function: typing.Optional[typing.Callable] = None preprocess_batch: bool = True seed: int = 2016 use_auth_token: bool = False )
Parameters
dataset_name (str) — The dataset repository name on the Hugging Face Hub, or the path to a local directory containing the data files to load and use for the calibration step.
num_samples (int, defaults to 100) — The maximum number of samples composing the calibration dataset.
dataset_config_name (str, optional) — The name of the dataset configuration.
dataset_split (str, optional) — Which split of the dataset to use to perform the calibration step.
preprocess_function (Callable, optional) — The processing function to apply to each example after loading the dataset.
preprocess_batch (bool, defaults to True) — Whether the preprocess_function should be batched.
seed (int, defaults to 2016) — The random seed to use when shuffling the calibration dataset.
use_auth_token (bool, defaults to False) — Whether to use the token generated when running transformers-cli login (necessary for some datasets like ImageNet).
Creates the calibration datasets.Dataset to use for the post-training static quantization calibration step.
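A minimal sketch using GLUE/SST-2 as the calibration data; the dataset, column name, tokenizer checkpoint, and preprocessing are example choices, not requirements:

from functools import partial
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")

def preprocess_fn(examples, tokenizer):
    # "sentence" is the text column of GLUE/SST-2; adapt it to your dataset.
    return tokenizer(examples["sentence"], padding="max_length", truncation=True)

# `quantizer` is the ORTQuantizer from the from_pretrained example above.
calibration_dataset = quantizer.get_calibration_dataset(
    "glue",
    dataset_config_name="sst2",
    dataset_split="train",
    num_samples=50,
    preprocess_function=partial(preprocess_fn, tokenizer=tokenizer),
)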
partial_fit
( dataset: Dataset calibration_config: CalibrationConfig onnx_augmented_model_name: str = 'augmented_model.onnx' operators_to_quantize: typing.Optional[typing.List[str]] = None batch_size: int = 1 use_external_data_format: bool = False use_gpu: bool = False force_symmetric_range: bool = False )
Parameters
dataset (Dataset) — The dataset to use when performing the calibration step.
calibration_config (CalibrationConfig) — The configuration containing the parameters related to the calibration step.
onnx_augmented_model_name (Union[str, os.PathLike]) — The path used to save the augmented model used to collect the quantization ranges.
operators_to_quantize (list, optional) — The list of operator types to quantize.
batch_size (int, defaults to 1) — The batch size to use when collecting the quantization range values.
use_external_data_format (bool, defaults to False) — Whether to use the external data format to store a model whose size is >= 2GB.
use_gpu (bool, defaults to False) — Whether to use the GPU when collecting the quantization range values.
force_symmetric_range (bool, defaults to False) — Whether to make the quantization ranges symmetric.
Perform the calibration step and collect the quantization ranges.
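Unlike fit, partial_fit only accumulates calibration statistics, which makes it convenient for processing a large dataset in shards. A sketch, assuming the same quantizer, calibration_config, and qconfig as in the fit example above, and that the accumulated ranges are retrieved afterwards with compute_ranges():

# `quantizer`, `calibration_dataset`, `calibration_config` and `qconfig`
# as in the fit example above.
num_shards = 4
for i in range(num_shards):
    shard = calibration_dataset.shard(num_shards=num_shards, index=i)
    quantizer.partial_fit(
        dataset=shard,
        calibration_config=calibration_config,
        operators_to_quantize=qconfig.operators_to_quantize,
    )
# Aggregate the statistics accumulated across the partial_fit() calls.
calibration_ranges = quantizer.compute_ranges()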
quantize
( quantization_config: QuantizationConfig save_dir: typing.Union[str, pathlib.Path] file_suffix: typing.Optional[str] = 'quantized' calibration_tensors_range: typing.Union[typing.Dict[str, typing.Tuple[float, float]], NoneType] = None use_external_data_format: bool = False preprocessor: typing.Optional[optimum.onnxruntime.preprocessors.quantization.QuantizationPreprocessor] = None )
Parameters
quantization_config (QuantizationConfig) — The configuration containing the parameters related to quantization.
save_dir (Union[str, Path]) — The directory where the quantized model should be saved.
file_suffix (str, optional, defaults to "quantized") — The file suffix used to save the quantized model.
calibration_tensors_range (Dict[NodeName, Tuple[float, float]], optional) — The dictionary mapping node names to their quantization ranges, used and required only when applying static quantization.
use_external_data_format (bool, defaults to False) — Whether to use the external data format to store a model whose size is >= 2GB.
preprocessor (QuantizationPreprocessor, optional) — The preprocessor to use to collect the nodes to include or exclude from quantization.
Quantizes a model given the optimization specifications defined in quantization_config.
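A minimal sketch of both modes; dynamic quantization needs no calibration ranges, while static quantization consumes the ranges collected with fit (or compute_ranges). The avx512_vnni config and the output directory names are example choices:

from optimum.onnxruntime.configuration import AutoQuantizationConfig

# Dynamic quantization: no calibration ranges required.
dqconfig = AutoQuantizationConfig.avx512_vnni(is_static=False, per_channel=False)
quantizer.quantize(save_dir="quantized_model", quantization_config=dqconfig)

# Static quantization: reuse the ranges collected with fit() in the example above.
quantizer.quantize(
    save_dir="quantized_model_static",
    quantization_config=qconfig,
    calibration_tensors_range=calibration_ranges,
)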