The Tasks Manager

Exporting a model from one framework to some format (also called a backend here) involves specifying the input and output information that the export function needs; optimum.exporters is structured the same way for each backend.

The role of the TasksManager is to be the main entry point to load a model given a name and a task, and to get the proper configuration for a given (architecture, backend) couple. That way, there is a centralized place to register the task -> model class and (architecture, backend) -> configuration mappings, which the export functions can rely on, along with the various checks the TasksManager provides.

Task names

The supported tasks may depend on the backend, but the tables below give the mapping between each task name and its auto class, for both PyTorch and TensorFlow.

It is possible to list the tasks supported for a given model type and backend:

>>> from optimum.exporters.tasks import TasksManager

>>> model_type = "distilbert"
>>> # For instance, for the ONNX export.
>>> backend = "onnx"
>>> distilbert_tasks = list(TasksManager.get_supported_tasks_for_model_type(model_type, backend).keys())

>>> print(distilbert_tasks)
['default', 'masked-lm', 'causal-lm', 'sequence-classification', 'token-classification', 'question-answering']

PyTorch

| Task | Auto Class |
|------|------------|
| causal-lm, causal-lm-with-past | AutoModelForCausalLM |
| default, default-with-past | AutoModel |
| masked-lm | AutoModelForMaskedLM |
| question-answering | AutoModelForQuestionAnswering |
| seq2seq-lm, seq2seq-lm-with-past | AutoModelForSeq2SeqLM |
| sequence-classification | AutoModelForSequenceClassification |
| token-classification | AutoModelForTokenClassification |
| multiple-choice | AutoModelForMultipleChoice |
| image-classification | AutoModelForImageClassification |
| object-detection | AutoModelForObjectDetection |
| image-segmentation | AutoModelForImageSegmentation |
| masked-im | AutoModelForMaskedImageModeling |
| semantic-segmentation | AutoModelForSemanticSegmentation |
| speech2seq-lm | AutoModelForSpeechSeq2Seq |

TensorFlow

| Task | Auto Class |
|------|------------|
| causal-lm, causal-lm-with-past | TFAutoModelForCausalLM |
| default, default-with-past | TFAutoModel |
| masked-lm | TFAutoModelForMaskedLM |
| question-answering | TFAutoModelForQuestionAnswering |
| seq2seq-lm, seq2seq-lm-with-past | TFAutoModelForSeq2SeqLM |
| sequence-classification | TFAutoModelForSequenceClassification |
| token-classification | TFAutoModelForTokenClassification |
| multiple-choice | TFAutoModelForMultipleChoice |
| semantic-segmentation | TFAutoModelForSemanticSegmentation |

Reference

class optimum.exporters.TasksManager

( )

Handles the task name -> model class and architecture -> configuration mappings.

determine_framework

( model: str, framework: str = None ) → str

Parameters

  • model (str) — The name of the model to export.
  • framework (str, optional) — The framework to use for the export. See below for the priority order if none is provided.

Returns

str

The framework to use for the export.

Determines the framework to use for the export.

The priority is in the following order:

  1. User input via framework.
  2. If local checkpoint is provided, use the same framework as the checkpoint.
  3. If model repo, try to infer the framework from the Hub.
  4. If could not infer, use available framework in environment, with priority given to PyTorch.

get_exporter_config_constructor

( model_type: str, exporter: str, task: str = 'default', model_name: typing.Optional[str] = None ) → ExportConfigConstructor

Parameters

  • model_type (str) — The model type to retrieve the config for.
  • exporter (str) — The exporter to use.
  • task (str, optional, defaults to "default") — The task to retrieve the config for.
  • model_name (str, optional) — The name attribute of the model object, only used for the exception message.

Returns

ExportConfigConstructor

The ExportConfig constructor for the requested backend.

Gets the ExportConfigConstructor for a model type and task combination.

get_model_class_for_task

( task: str, framework: str = 'pt' )

Parameters

  • task (str) — The task required.
  • framework (str, optional, defaults to "pt") — The framework to use for the export.

Attempts to retrieve an AutoModel class from a task name.

get_model_from_task

( task: str, model: str, framework: str = None, cache_dir: str = None )

Parameters

  • task (str) — The task required.
  • model (str) — The name of the model to export.
  • framework (str, optional) — The framework to use for the export. See TasksManager.determine_framework for the priority order if none is provided.
  • cache_dir (str, optional) — Path to a directory in which downloaded pretrained model weights have been cached if the standard cache should not be used.

Retrieves a model from its name and the task for which it should be loaded.

get_supported_tasks_for_model_type

( model_type: str, exporter: str, model_name: typing.Optional[str] = None ) → TaskNameToExportConfigDict

Parameters

  • model_type (str) — The model type to retrieve the supported tasks for.
  • exporter (str) — The name of the exporter.
  • model_name (str, optional) — The name attribute of the model object, only used for the exception message.

Returns

TaskNameToExportConfigDict

The dictionary mapping each task to a corresponding ExportConfig constructor.

Retrieves the task -> exporter backend config constructors map from the model type.

infer_task_from_model

( model_name_or_path: str ) → str

Parameters

  • model_name_or_path (str) — The name of the model repo on the Hub (local paths are not supported for now).

Returns

str

The task name automatically detected from the model repo.

Infers the task from the model repo.