Export functions
You can export models to ONNX from two frameworks in 🤗 Optimum: PyTorch and TensorFlow. Each framework has its own export function, export_pytorch() and export_tensorflow(), but the recommended entry point is the main export() function, which selects the proper exporting function according to the framework available on the system and the model class.
Main functions
optimum.exporters.onnx.convert.export_pytorch

( model: Union["PreTrainedModel", "ModelMixin"], config: OnnxConfig, opset: int, output: Path, device: str = "cpu", input_shapes: Optional[Dict] = None ) → Tuple[List[str], List[str]]
Parameters

- model (PreTrainedModel) — The model to export.
- config (OnnxConfig) — The ONNX configuration associated with the exported model.
- opset (int) — The version of the ONNX operator set to use.
- output (Path) — Directory to store the exported ONNX model.
- device (str, defaults to "cpu") — The device on which the ONNX model will be exported. Either "cpu" or "cuda". Only PyTorch is supported for export on CUDA devices.
- input_shapes (Optional[Dict], defaults to None) — If specified, allows specific shapes to be used for the example input provided to the ONNX exporter.

Returns

Tuple[List[str], List[str]] — A tuple with an ordered list of the model's inputs, and the named inputs from the ONNX configuration.
Exports a PyTorch model to an ONNX Intermediate Representation.
optimum.exporters.onnx.convert.export_tensorflow

( model: TFPreTrainedModel, config: OnnxConfig, opset: int, output: Path ) → Tuple[List[str], List[str]]
Parameters

- model (TFPreTrainedModel) — The model to export.
- config (OnnxConfig) — The ONNX configuration associated with the exported model.
- opset (int) — The version of the ONNX operator set to use.
- output (Path) — Directory to store the exported ONNX model.

Returns

Tuple[List[str], List[str]] — A tuple with an ordered list of the model's inputs, and the named inputs from the ONNX configuration.
Exports a TensorFlow model to an ONNX Intermediate Representation.
optimum.exporters.onnx.export

( model: Union["PreTrainedModel", "TFPreTrainedModel", "ModelMixin"], config: OnnxConfig, output: Path, opset: Optional[int] = None, device: str = "cpu", input_shapes: Optional[Dict] = None ) → Tuple[List[str], List[str]]
Parameters

- model (PreTrainedModel or TFPreTrainedModel) — The model to export.
- config (OnnxConfig) — The ONNX configuration associated with the exported model.
- output (Path) — Directory to store the exported ONNX model.
- opset (Optional[int], defaults to None) — The version of the ONNX operator set to use.
- device (str, optional, defaults to "cpu") — The device on which the ONNX model will be exported. Either "cpu" or "cuda". Only PyTorch is supported for export on CUDA devices.
- input_shapes (Optional[Dict], defaults to None) — If specified, allows specific shapes to be used for the example input provided to the ONNX exporter.

Returns

Tuple[List[str], List[str]] — A tuple with an ordered list of the model's inputs, and the named inputs from the ONNX configuration.
Exports a PyTorch or TensorFlow model to an ONNX Intermediate Representation.
Utility functions
optimum.exporters.onnx.convert.check_dummy_inputs_are_allowed

( model: Union["PreTrainedModel", "TFPreTrainedModel", "ModelMixin"], dummy_input_names: Iterable[str] )

Checks that the dummy inputs from the ONNX config are a subset of the allowed inputs for model.
optimum.exporters.onnx.validate_model_outputs

( config: OnnxConfig, reference_model: Union["PreTrainedModel", "TFPreTrainedModel", "ModelMixin"], onnx_model: Path, onnx_named_outputs: List[str], atol: Optional[float] = None, input_shapes: Optional[Dict] = None, device: str = "cpu" )
Parameters

- config (OnnxConfig) — The configuration used to export the model.
- reference_model (PreTrainedModel or TFPreTrainedModel) — The model used for the export.
- onnx_model (Path) — The path to the exported model.
- onnx_named_outputs (List[str]) — The names of the outputs to check.
- atol (Optional[float], defaults to None) — The absolute tolerance in terms of outputs difference between the reference and the exported model.
- input_shapes (Optional[Dict], defaults to None) — If specified, allows specific shapes to be used when validating the ONNX model.
- device (str, defaults to "cpu") — The device on which the ONNX model will be validated. Either "cpu" or "cuda". Validation on a CUDA device is supported only for PyTorch.
Raises

ValueError — If the output shapes or values do not match between the reference and the exported model.
Validates the export by checking that the outputs from both the reference and the exported model match.