Configuration classes for ONNX exports

Exporting a model to ONNX involves specifying:

  1. The input names.
  2. The output names.
  3. The dynamic axes. These refer to the input dimensions that can be changed dynamically at runtime (e.g. the batch size or sequence length). All other axes are treated as static, and hence fixed at runtime.
  4. Dummy inputs to trace the model. This is needed in PyTorch to record the computational graph and convert it to ONNX.

Since this data depends on the choice of model and task, we represent it in terms of configuration classes. Each configuration class is associated with a specific model architecture, and follows the naming convention ArchitectureNameOnnxConfig. For instance, the configuration which specifies the ONNX export of BERT models is BertOnnxConfig.
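
For example, the following sketch (the import path and checkpoint name are illustrative assumptions) builds the BERT configuration and inspects the input and output names together with their dynamic axes:

    from transformers import AutoConfig
    from optimum.exporters.onnx.model_configs import BertOnnxConfig  # import path is an assumption

    # Build the ONNX export configuration from the model configuration.
    model_config = AutoConfig.from_pretrained("bert-base-uncased")  # illustrative checkpoint
    onnx_config = BertOnnxConfig(model_config, task="default")

    # Input and output names mapped to their dynamic axes, e.g.
    # {"input_ids": {0: "batch_size", 1: "sequence_length"}, ...}
    print(onnx_config.inputs)
    print(onnx_config.outputs)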

Since many architectures share similar properties for their ONNX configuration, 🤗 Optimum adopts a 3-level class hierarchy:

  1. Abstract and generic base classes. These handle all the fundamental features, while being agnostic to the modality (text, image, audio, etc.).
  2. Middle-end classes. These are aware of the modality, but multiple can exist for the same modality depending on the inputs they support. They specify which input generators should be used for the dummy inputs, but remain model-agnostic.
  3. Model-specific classes like the BertOnnxConfig mentioned above. These are the ones actually used to export models.
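
As a rough sketch of that hierarchy for BERT (the import paths are assumptions), the model-specific class sits below a middle-end class, which in turn sits below the abstract base class:

    from optimum.exporters.onnx import OnnxConfig, TextEncoderOnnxConfig
    from optimum.exporters.onnx.model_configs import BertOnnxConfig  # import path is an assumption

    # Model-specific class -> modality-aware middle-end class -> abstract base class.
    assert issubclass(BertOnnxConfig, TextEncoderOnnxConfig)
    assert issubclass(TextEncoderOnnxConfig, OnnxConfig)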

Base classes

class optimum.exporters.onnx.OnnxConfig

( config: PretrainedConfig task: str = 'default' patching_specs: typing.Optional[typing.List[optimum.exporters.onnx.base.PatchingSpec]] = None )

Parameters

  • config (transformers.PretrainedConfig) — The model configuration.
  • task (str, defaults to "default") — The task the model should be exported for.

Base class for ONNX-exportable models, describing metadata on how to export the model through the ONNX format.

create_dummy_input_generator_classes

( )

Instantiates the dummy input generators from self.DUMMY_INPUT_GENERATOR_CLASSES.

The dummy input generators are independent, so this method instantiates the first generator and forces the others to use the same batch size, ensuring that all dummy inputs share the same batch size. Override this method for custom behavior.
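
As a minimal sketch (the import path is an assumption), the generator classes that this method instantiates are declared directly on the configuration class:

    from optimum.exporters.onnx.model_configs import BertOnnxConfig  # import path is an assumption

    # DUMMY_INPUT_GENERATOR_CLASSES is the class attribute that
    # create_dummy_input_generator_classes() instantiates with a shared batch size.
    print(BertOnnxConfig.DUMMY_INPUT_GENERATOR_CLASSES)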

flatten_output_collection_property

( name: str field: typing.Iterable[typing.Any] ) → Dict[str, Any]

Parameters

  • name (str) — The name of the nested structure.
  • field (Iterable[Any]) — The structure to potentially flatten.

Returns

Dict[str, Any]

The outputs, with a flattened structure and keys mapping to this new structure.

Flattens any potentially nested structure, expanding the name of the field with the index of the element within the structure.
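
A hypothetical usage sketch (the import path and checkpoint name are assumptions); the exact keys depend on the structure being flattened:

    from transformers import AutoConfig
    from optimum.exporters.onnx.model_configs import BertOnnxConfig  # import path is an assumption

    onnx_config = BertOnnxConfig(AutoConfig.from_pretrained("bert-base-uncased"))

    # Each element of the nested structure is exposed under a flat "<name>.<index>" style key.
    flat = onnx_config.flatten_output_collection_property("past_key_values", [("k", "v"), ("k", "v")])
    print(flat)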

generate_dummy_inputs

( framework: str = 'pt' ) → Dict

Parameters

  • framework (str, defaults to "pt") — The framework for which to create the dummy inputs.

Returns

Dict

A dictionary mapping the input names to dummy tensors in the proper framework format.

Generates the dummy inputs necessary for tracing the model.
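
A minimal sketch of inspecting the dummy inputs for a BERT configuration (the import path, checkpoint name, and resulting shapes are illustrative):

    from transformers import AutoConfig
    from optimum.exporters.onnx.model_configs import BertOnnxConfig  # import path is an assumption

    onnx_config = BertOnnxConfig(AutoConfig.from_pretrained("bert-base-uncased"))

    # PyTorch tensors keyed by input name, ready for tracing the model.
    dummy_inputs = onnx_config.generate_dummy_inputs(framework="pt")
    print({name: tuple(tensor.shape) for name, tensor in dummy_inputs.items()})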

ordered_inputs

( model: typing.Union[ForwardRef('PreTrainedModel'), ForwardRef('TFPreTrainedModel')] ) → Mapping[str, Mapping[int, str]]

Parameters

  • model (PreTrainedModel or TFPreTrainedModel) — The model whose forward pass signature is used to order the inputs.

Returns

Mapping[str, Mapping[int, str]]

The properly ordered inputs.

Re-orders the inputs using the model forward pass signature.
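
A hedged usage sketch (the import path and checkpoint name are assumptions):

    from transformers import AutoModel
    from optimum.exporters.onnx.model_configs import BertOnnxConfig  # import path is an assumption

    model = AutoModel.from_pretrained("bert-base-uncased")
    onnx_config = BertOnnxConfig(model.config)

    # Input names and dynamic axes, re-ordered to follow the signature of model.forward.
    print(onnx_config.ordered_inputs(model))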

class optimum.exporters.onnx.OnnxConfigWithPast

( config: PretrainedConfig task: str = 'default' patching_specs: typing.List[optimum.exporters.onnx.base.PatchingSpec] = None use_past: bool = False )

Handles models that can use past key/values (a KV cache) in their inputs and outputs.

add_past_key_values

( inputs_or_outputs: typing.Mapping[str, typing.Mapping[int, str]] direction: str )

Parameters

  • inputs_or_outputs (Mapping[str, Mapping[int, str]]) — The mapping to fill.
  • direction (str) — Either "inputs" or "outputs"; specifies whether inputs_or_outputs is the input mapping or the output mapping. This is important for axes naming.

Fills the inputs_or_outputs mapping with past_key_values dynamic axes, taking the direction into account.
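
A hypothetical sketch for a decoder model; it assumes GPT2OnnxConfig is importable as shown and accepts use_past=True for its default task:

    from transformers import AutoConfig
    from optimum.exporters.onnx.model_configs import GPT2OnnxConfig  # import path is an assumption

    # Assumes use_past=True is accepted here for the default task.
    onnx_config = GPT2OnnxConfig(AutoConfig.from_pretrained("gpt2"), use_past=True)

    # Start from the regular input axes and let the config add the past_key_values entries.
    inputs = {"input_ids": {0: "batch_size", 1: "sequence_length"}}
    onnx_config.add_past_key_values(inputs, direction="inputs")
    print(inputs)  # now also contains past_key_values.* keys with their dynamic axes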

with_past

( config: PretrainedConfig task: str = 'default' ) → OnnxConfig

Parameters

  • config (transformers.PretrainedConfig) — The underlying model’s config to use when exporting to ONNX.
  • task (str, defaults to "default") — The task the model should be exported for.

Returns

OnnxConfig

The ONNX config with .use_past = True.

Instantiates an OnnxConfig with the use_past attribute set to True.
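
Equivalently, a sketch using the with_past constructor instead of passing use_past manually (same assumptions as above):

    from transformers import AutoConfig
    from optimum.exporters.onnx.model_configs import GPT2OnnxConfig  # import path is an assumption

    # Same result as passing use_past=True to the constructor.
    onnx_config = GPT2OnnxConfig.with_past(AutoConfig.from_pretrained("gpt2"))
    assert onnx_config.use_past is True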

class optimum.exporters.onnx.OnnxSeq2SeqConfigWithPast

( config: PretrainedConfig task: str = 'default' patching_specs: typing.List[optimum.exporters.onnx.base.PatchingSpec] = None use_past: bool = False )

Handles encoder-decoder models that can use past key/values (a KV cache) in their inputs and outputs.

Middle-end classes

Text

class optimum.exporters.onnx.TextEncoderOnnxConfig

( config: PretrainedConfig task: str = 'default' patching_specs: typing.Optional[typing.List[optimum.exporters.onnx.base.PatchingSpec]] = None )

Handles encoder-based text architectures.

class optimum.exporters.onnx.TextDecoderOnnxConfig

( config: PretrainedConfig task: str = 'default' patching_specs: typing.List[optimum.exporters.onnx.base.PatchingSpec] = None use_past: bool = False )

Handles decoder-based text architectures.

class optimum.exporters.onnx.TextSeq2SeqOnnxConfig

( config: PretrainedConfig task: str = 'default' patching_specs: typing.List[optimum.exporters.onnx.base.PatchingSpec] = None use_past: bool = False )

Handles encoder-decoder-based text architectures.

Vision

class optimum.exporters.onnx.config.VisionOnnxConfig

( config: PretrainedConfig task: str = 'default' patching_specs: typing.Optional[typing.List[optimum.exporters.onnx.base.PatchingSpec]] = None )

Handles vision architectures.

Multi-modal

class optimum.exporters.onnx.config.TextAndVisionOnnxConfig

( config: PretrainedConfig task: str = 'default' patching_specs: typing.Optional[typing.List[optimum.exporters.onnx.base.PatchingSpec]] = None )

Handles multi-modal text and vision architectures.

Supported architectures