Exporting a model to ONNX involves specifying:

1. The input names.
2. The output names.
3. The dynamic axes, i.e. the input dimensions that can change at runtime (for instance the batch size or the sequence length); all other axes are treated as static, and hence fixed at runtime.
4. The dummy inputs used to trace the model, which are needed to record the computational graph.
Since this data depends on the choice of model and task, we represent it in terms of configuration classes. Each configuration class is associated with a specific model architecture and follows the naming convention ArchitectureNameOnnxConfig. For instance, the configuration that specifies the ONNX export of BERT models is BertOnnxConfig.
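As a rough illustration of this naming convention, the snippet below sketches how an exporter might resolve an architecture name to its configuration class. The classes and lookup function here are toy stand-ins for illustration only, not Optimum's actual implementation:

```python
# Toy sketch: one configuration class per architecture, named
# <ArchitectureName>OnnxConfig. These classes are stand-ins, not Optimum's.

class OnnxConfig:
    """Minimal base: records which model config and task to export for."""
    def __init__(self, config, task="default"):
        self.config = config
        self.task = task

class BertOnnxConfig(OnnxConfig):
    """Would describe the inputs, outputs, and dynamic axes for BERT."""

def resolve_onnx_config(architecture: str):
    # Follow the ArchitectureNameOnnxConfig naming convention.
    name = f"{architecture}OnnxConfig"
    cls = globals().get(name)
    if cls is None:
        raise KeyError(f"No ONNX config found for architecture {architecture!r}")
    return cls

print(resolve_onnx_config("Bert").__name__)  # BertOnnxConfig
```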
Since many architectures share similar properties for their ONNX configuration, 🤗 Optimum adopts a 3-level class hierarchy:

1. Abstract and generic base classes. These handle all the fundamental features while being agnostic to the modality (text, image, audio, etc.).
2. Middle-end classes. These are aware of the modality, but handle it in a generic way, regardless of the model architecture.
3. Model-specific classes, like the BertOnnxConfig mentioned above. These are the ones actually used to export models.

OnnxConfig

( config: PretrainedConfig task: str = 'default' patching_specs: typing.Optional[typing.List[optimum.exporters.onnx.base.PatchingSpec]] = None )

Base class for an ONNX-exportable model, describing metadata on how to export the model through the ONNX format.
Class attributes:

- NORMALIZED_CONFIG_CLASS (Type) — A class derived from NormalizedConfig specifying how to normalize the model config.
- DUMMY_INPUT_GENERATOR_CLASSES (Tuple[Type]) — A tuple of classes derived from ~optimum.utils.DummyInputGenerator specifying how to create dummy inputs.
- ATOL_FOR_VALIDATION (Union[float, Dict[str, float]]) — A float or a dictionary mapping task names to floats, where the float values represent the absolute tolerance to use during model conversion validation.
- DEFAULT_ONNX_OPSET (int, defaults to 11) — The default ONNX opset to use for the ONNX export.
- MIN_TORCH_VERSION (packaging.version.Version) — The minimum torch version supporting the export of the model to ONNX.

_create_dummy_input_generator_classes

Instantiates the dummy input generators from self.DUMMY_INPUT_GENERATOR_CLASSES. Each dummy input generator is independent, so this method instantiates the first generator and forces the other generators to use the same batch size, meaning they will all produce inputs of the same batch size. Override this method for custom behavior.
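The batch-size coordination described above can be sketched as follows. The generator class here is a toy stand-in, not optimum.utils.DummyInputGenerator:

```python
import random

class ToyInputGenerator:
    """Stand-in for a dummy input generator; picks a random batch size
    unless one is forced on it."""
    def __init__(self, batch_size=None):
        self.batch_size = batch_size if batch_size is not None else random.randint(1, 8)

def create_dummy_input_generators(generator_classes):
    # Instantiate the first generator, then force the remaining generators
    # to reuse its batch size, so all dummy inputs agree on the batch dim.
    first = generator_classes[0]()
    rest = [cls(batch_size=first.batch_size) for cls in generator_classes[1:]]
    return [first, *rest]

gens = create_dummy_input_generators([ToyInputGenerator] * 3)
print({g.batch_size for g in gens})  # a single shared batch size
```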
flatten_output_collection_property

( name: str field: typing.Iterable[typing.Any] ) → Dict[str, Any]

Flattens any potential nested structure, expanding the name of the field with the index of the element within the structure.
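The flattening behaviour can be illustrated with a small stand-alone function. This is a sketch of the idea, not the exact Optimum implementation:

```python
from typing import Any, Dict, Iterable

def flatten_collection(name: str, field: Iterable[Any]) -> Dict[str, Any]:
    # Expand the field name with the index of each element, e.g. a list of
    # past key/value pairs becomes past_key_values.0, past_key_values.1, ...
    return {f"{name}.{idx}": item for idx, item in enumerate(field)}

flat = flatten_collection("past_key_values", [("k0", "v0"), ("k1", "v1")])
print(flat)  # {'past_key_values.0': ('k0', 'v0'), 'past_key_values.1': ('k1', 'v1')}
```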
generate_dummy_inputs

( framework: str = 'pt' ) → Dict

Generates the dummy inputs necessary for tracing the model.
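A minimal sketch of what dummy-input generation might produce for a text model. The input names and shapes here are illustrative; the real method delegates to the DUMMY_INPUT_GENERATOR_CLASSES and returns framework tensors rather than nested lists:

```python
from typing import Dict, List

def generate_dummy_inputs(batch_size: int = 2, seq_len: int = 8) -> Dict[str, List[List[int]]]:
    # Fabricate plausibly-shaped integer "tensors" as nested lists; a real
    # exporter would return torch/tf tensors for the chosen framework.
    input_ids = [[1] * seq_len for _ in range(batch_size)]
    attention_mask = [[1] * seq_len for _ in range(batch_size)]
    return {"input_ids": input_ids, "attention_mask": attention_mask}

dummy = generate_dummy_inputs()
print({k: (len(v), len(v[0])) for k, v in dummy.items()})
# {'input_ids': (2, 8), 'attention_mask': (2, 8)}
```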
ordered_inputs

( model: typing.Union[ForwardRef('PreTrainedModel'), ForwardRef('TFPreTrainedModel')] ) → Mapping[str, Mapping[int, str]]

Re-orders the inputs using the model forward pass signature.

Returns: Mapping[str, Mapping[int, str]] — The properly ordered inputs.
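Signature-based re-ordering can be sketched with the standard-library inspect module. The forward function below is a toy stand-in; Optimum inspects the model's actual forward pass:

```python
import inspect
from typing import Dict, Mapping

def toy_forward(input_ids, attention_mask, token_type_ids):
    """Stand-in for a model's forward pass; only its signature matters here."""

def ordered_inputs(forward, inputs: Mapping[str, Mapping[int, str]]) -> Dict[str, Mapping[int, str]]:
    # Order the exported input names to match the forward signature, so the
    # inputs line up with the model's arguments during tracing.
    sig_order = list(inspect.signature(forward).parameters)
    return {name: inputs[name] for name in sig_order if name in inputs}

inputs = {"attention_mask": {0: "batch"}, "input_ids": {0: "batch", 1: "sequence"}}
print(list(ordered_inputs(toy_forward, inputs)))  # ['input_ids', 'attention_mask']
```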
OnnxConfigWithPast

( config: PretrainedConfig task: str = 'default' patching_specs: typing.List[optimum.exporters.onnx.base.PatchingSpec] = None use_past: bool = False )

add_past_key_values

( inputs_or_outputs: typing.Mapping[str, typing.Mapping[int, str]] direction: str )

Fills the inputs_or_outputs mapping with past_key_values dynamic axes, considering the direction.
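The direction-dependent axis naming can be sketched like this. The axis names, entry names, and layer count below are illustrative, not necessarily those Optimum emits:

```python
from typing import Dict, Mapping

def add_past_key_values(inputs_or_outputs: Dict[str, Mapping[int, str]],
                        direction: str, num_layers: int = 2) -> None:
    # For inputs the sequence axis holds the *past* length; for outputs it
    # holds past + current length. Axis 0 is always the batch dimension.
    if direction not in ("inputs", "outputs"):
        raise ValueError(f'direction must be "inputs" or "outputs", got {direction!r}')
    name = "past_key_values" if direction == "inputs" else "present"
    seq = ("past_sequence_length" if direction == "inputs"
           else "past_sequence_length + sequence_length")
    for i in range(num_layers):
        inputs_or_outputs[f"{name}.{i}.key"] = {0: "batch_size", 2: seq}
        inputs_or_outputs[f"{name}.{i}.value"] = {0: "batch_size", 2: seq}

mapping = {}
add_past_key_values(mapping, direction="inputs", num_layers=1)
print(sorted(mapping))  # ['past_key_values.0.key', 'past_key_values.0.value']
```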
with_past

( config: PretrainedConfig task: str = 'default' ) → OnnxConfig

Instantiates an OnnxConfig with the use_past attribute set to True.
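The with_past constructor pattern amounts to a classmethod that flips the use_past flag; roughly, in a toy stand-in class (not Optimum's):

```python
class ToyOnnxConfigWithPast:
    """Toy stand-in for OnnxConfigWithPast, for illustration only."""
    def __init__(self, config, task="default", use_past=False):
        self.config = config
        self.task = task
        self.use_past = use_past

    @classmethod
    def with_past(cls, config, task="default"):
        # Convenience constructor: same config and task, but with
        # past key/values support enabled.
        return cls(config, task=task, use_past=True)

cfg = ToyOnnxConfigWithPast.with_past(config={"model_type": "gpt2"}, task="causal-lm")
print(cfg.use_past)  # True
```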
OnnxSeq2SeqConfigWithPast

( config: PretrainedConfig task: str = 'default' patching_specs: typing.List[optimum.exporters.onnx.base.PatchingSpec] = None use_past: bool = False )
Middle-end classes

TextEncoderOnnxConfig

( config: PretrainedConfig task: str = 'default' patching_specs: typing.Optional[typing.List[optimum.exporters.onnx.base.PatchingSpec]] = None )

Handles encoder-based text architectures.
TextDecoderOnnxConfig

( config: PretrainedConfig task: str = 'default' patching_specs: typing.List[optimum.exporters.onnx.base.PatchingSpec] = None use_past: bool = False )

Handles decoder-based text architectures.
TextSeq2SeqOnnxConfig

( config: PretrainedConfig task: str = 'default' patching_specs: typing.List[optimum.exporters.onnx.base.PatchingSpec] = None use_past: bool = False )

Handles encoder-decoder-based text architectures.
VisionOnnxConfig

( config: PretrainedConfig task: str = 'default' patching_specs: typing.Optional[typing.List[optimum.exporters.onnx.base.PatchingSpec]] = None )

Handles vision architectures.
TextAndVisionOnnxConfig

( config: PretrainedConfig task: str = 'default' patching_specs: typing.Optional[typing.List[optimum.exporters.onnx.base.PatchingSpec]] = None )

Handles multi-modal text and vision architectures.