Configuration classes for TFLite export

Base classes

class optimum.exporters.tflite.TFLiteConfig

( config: PretrainedConfig task: str batch_size: int = 1 sequence_length: typing.Optional[int] = None num_choices: typing.Optional[int] = None width: typing.Optional[int] = None height: typing.Optional[int] = None num_channels: typing.Optional[int] = None feature_size: typing.Optional[int] = None nb_max_frames: typing.Optional[int] = None audio_sequence_length: typing.Optional[int] = None point_batch_size: typing.Optional[int] = None nb_points_per_image: typing.Optional[int] = None )

Parameters

  • config (transformers.PretrainedConfig) — The model configuration.
  • task (str, defaults to "feature-extraction") — The task the model should be exported for.
  • The remaining arguments specify the shapes of the inputs the model can take. Whether each of them is required depends on the model the TFLiteConfig is designed for.

Base class for TFLite-exportable models, describing metadata on how to export the model to the TFLite format.

Class attributes:

  • NORMALIZED_CONFIG_CLASS (Type) — A class derived from NormalizedConfig specifying how to normalize the model config.

  • DUMMY_INPUT_GENERATOR_CLASSES (Tuple[Type]) — A tuple of classes derived from DummyInputGenerator specifying how to create dummy inputs.

  • ATOL_FOR_VALIDATION (Union[float, Dict[str, float]]) — A float or a dictionary mapping task names to floats, giving the absolute tolerance to use when validating the converted model.

  • MANDATORY_AXES (Tuple[Union[str, Tuple[Union[str, Tuple[str]]]]]) — A tuple where each element is either:

    • An axis name, for instance “batch_size” or “sequence_length”, that indicates that the axis dimension is needed to export the model,
    • Or a tuple containing two elements:
      • The first one is either a string or a tuple of strings and specifies for which task(s) the axis is needed
      • The second one is the axis name.

    For example: MANDATORY_AXES = ("batch_size", "sequence_length", ("multiple-choice", "num_choices")) means that to export the model, the batch size and sequence length values always need to be specified, and that a value for the number of possible choices is needed when the task is multiple-choice.
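
As a rough illustration of how these attributes fit together, here is a minimal sketch of a custom configuration for an encoder-style text model; the class name, tolerance value and axes are illustrative, not taken from the library:

from optimum.exporters.tflite import TFLiteConfig
from optimum.utils import DummyTextInputGenerator, NormalizedTextConfig

class MyModelTFLiteConfig(TFLiteConfig):
    # How to read dimensions (hidden size, number of layers, ...) from the model config.
    NORMALIZED_CONFIG_CLASS = NormalizedTextConfig
    # Generators used to build the dummy tensors consumed during export.
    DUMMY_INPUT_GENERATOR_CLASSES = (DummyTextInputGenerator,)
    # Absolute tolerance for conversion validation (illustrative value).
    ATOL_FOR_VALIDATION = 1e-4
    # batch_size and sequence_length are always required; num_choices only for multiple-choice.
    MANDATORY_AXES = ("batch_size", "sequence_length", ("multiple-choice", "num_choices"))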

inputs

( ) → List[str]

Returns

List[str]

A list of input names.

List containing the names of the inputs the exported model should take.

outputs

( ) → List[str]

Returns

List[str]

A list of output names.

List containing the names of the outputs the exported model should have.
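
For example, a configuration aimed at text classification could declare its tensor names along these lines. This is a hedged sketch showing only the two properties; the exact names depend on the model and task, and a real subclass would also set the class attributes described above:

from typing import List

from optimum.exporters.tflite import TFLiteConfig

class MyTextClassificationTFLiteConfig(TFLiteConfig):
    @property
    def inputs(self) -> List[str]:
        # Tensor names the exported TFLite model will expect.
        return ["input_ids", "attention_mask", "token_type_ids"]

    @property
    def outputs(self) -> List[str]:
        # Tensor names the exported TFLite model will produce.
        return ["logits"]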

generate_dummy_inputs

( ) → Dict[str, tf.Tensor]

Returns

Dict[str, tf.Tensor]

A dictionary mapping input names to dummy tensors.

Generates dummy inputs that the exported model should be able to process. This method is used to determine the input specs needed for the export.
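
You rarely need to call this method yourself since the export machinery does it for you, but it can be handy for inspecting the expected input shapes. A sketch, assuming TasksManager can resolve a TFLite configuration for the model:

from optimum.exporters.tasks import TasksManager
from transformers import TFAutoModel

model = TFAutoModel.from_pretrained("bert-base-uncased")

# Resolve the TFLite config class registered for this model and task.
constructor = TasksManager.get_exporter_config_constructor(
    exporter="tflite", model=model, task="feature-extraction"
)
tflite_config = constructor(model.config, task="feature-extraction", batch_size=1, sequence_length=128)

for name, tensor in tflite_config.generate_dummy_inputs().items():
    print(name, tensor.shape)  # e.g. input_ids (1, 128)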

Middle-end classes

class optimum.exporters.tflite.config.TextEncoderTFliteConfig

( config: PretrainedConfig task: str batch_size: int = 1 sequence_length: typing.Optional[int] = None num_choices: typing.Optional[int] = None width: typing.Optional[int] = None height: typing.Optional[int] = None num_channels: typing.Optional[int] = None feature_size: typing.Optional[int] = None nb_max_frames: typing.Optional[int] = None audio_sequence_length: typing.Optional[int] = None point_batch_size: typing.Optional[int] = None nb_points_per_image: typing.Optional[int] = None )

Handles encoder-based text architectures.

class optimum.exporters.tflite.config.VisionTFLiteConfig

( config: PretrainedConfig task: str batch_size: int = 1 sequence_length: typing.Optional[int] = None num_choices: typing.Optional[int] = None width: typing.Optional[int] = None height: typing.Optional[int] = None num_channels: typing.Optional[int] = None feature_size: typing.Optional[int] = None nb_max_frames: typing.Optional[int] = None audio_sequence_length: typing.Optional[int] = None point_batch_size: typing.Optional[int] = None nb_points_per_image: typing.Optional[int] = None )

Handles vision architectures.
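
Model-specific configurations usually derive from one of these middle-end classes rather than from TFLiteConfig directly, so that the attributes shared by a whole family of architectures can be inherited. A hedged sketch of what such a subclass could look like; the class name and values are illustrative:

from optimum.exporters.tflite.config import TextEncoderTFliteConfig
from optimum.utils import NormalizedTextConfig

class MyBertLikeTFLiteConfig(TextEncoderTFliteConfig):
    # Point at the right normalized config; the remaining behavior is inherited.
    NORMALIZED_CONFIG_CLASS = NormalizedTextConfig
    ATOL_FOR_VALIDATION = 1e-4  # illustrative tolerance for conversion validation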