Optimum documentation

Models

Generic model classes

The following ORT classes are available for instantiating a base model class without a specific head.

ORTModel

class optimum.onnxruntime.ORTModel

( model: InferenceSession config: PretrainedConfig use_io_binding: typing.Optional[bool] = None model_save_dir: typing.Union[str, pathlib.Path, tempfile.TemporaryDirectory, NoneType] = None preprocessors: typing.Optional[typing.List] = None **kwargs )

Base class for implementing models using ONNX Runtime.

The ORTModel class implements generic methods for interacting with the Hugging Face Hub as well as exporting vanilla Transformers models to ONNX using the optimum.exporters.onnx toolchain.

Class attributes:

  • model_type (str, optional, defaults to "onnx_model") — The name of the model type to use when registering the ORTModel classes.
  • auto_model_class (Type, optional, defaults to AutoModel) — The “AutoModel” class represented by the current ORTModel class.

Common attributes:

  • model (ort.InferenceSession) — The ONNX Runtime InferenceSession that is running the model.
  • config (PretrainedConfig) — The configuration of the model.
  • use_io_binding (bool, optional, defaults to True) — Whether to use I/O binding with ONNX Runtime and the CUDAExecutionProvider; this can significantly speed up inference depending on the task.
  • model_save_dir (Path) — The directory where the model exported to ONNX is saved. By default, if the loaded model is local, the directory of the original model is used. Otherwise, the cache directory is used.
  • providers (List[str]) — The list of execution providers available to ONNX Runtime.

from_pretrained

( model_id: typing.Union[str, pathlib.Path] export: bool = False force_download: bool = False use_auth_token: typing.Optional[str] = None cache_dir: typing.Optional[str] = None subfolder: str = '' config: typing.Optional[ForwardRef('PretrainedConfig')] = None local_files_only: bool = False provider: str = 'CPUExecutionProvider' session_options: typing.Optional[onnxruntime.capi.onnxruntime_pybind11_state.SessionOptions] = None provider_options: typing.Union[typing.Dict[str, typing.Any], NoneType] = None **kwargs ) ORTModel

Parameters

  • model_id (Union[str, Path]) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
    • A path to a directory containing a model saved using ~OptimizedModel.save_pretrained, e.g., ./my_model_directory/.
  • from_transformers (bool, defaults to False) — Defines whether the provided model_id contains a vanilla Transformers checkpoint.
  • force_download (bool, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • use_auth_token (Optional[str], defaults to None) — The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running transformers-cli login (stored in ~/.huggingface).
  • cache_dir (Optional[str], defaults to None) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • subfolder (str, defaults to "") — In case the relevant files are located inside a subfolder of the model repo either locally or on huggingface.co, you can specify the folder name here.
  • config (Optional[transformers.PretrainedConfig], defaults to None) — The model configuration.
  • local_files_only (Optional[bool], defaults to False) — Whether or not to only look at local files (i.e., do not try to download the model).
  • trust_remote_code (bool, defaults to False) — Whether or not to allow custom code defined on the Hub in its own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
  • provider (str, defaults to "CPUExecutionProvider") — ONNX Runtime provider to use for loading the model. See https://onnxruntime.ai/docs/execution-providers/ for possible providers.
  • session_options (Optional[onnxruntime.SessionOptions], defaults to None) — ONNX Runtime session options to use for loading the model.
  • provider_options (Optional[Dict[str, Any]], defaults to None) — Provider option dictionaries corresponding to the provider used. See available options for each provider: https://onnxruntime.ai/docs/api/c/group___global.html .
  • kwargs (Dict[str, Any]) — Will be passed to the underlying model loading methods.

Parameters for decoder models (ORTModelForCausalLM, ORTModelForSeq2SeqLM, ORTModelForSpeechSeq2Seq, ORTModelForVision2Seq)

  • use_cache (Optional[bool], defaults to True) — Whether or not the past key/values cache should be used.

Parameters for ORTModelForCausalLM

  • use_merged (Optional[bool], defaults to None) — Whether or not to use a single ONNX model that handles both the decoding without and with past key values reuse. This option defaults to True if loading from a local repository and a merged decoder is found. When exporting with from_transformers=True, it defaults to False. This option should be set to True to minimize memory usage.

Returns

ORTModel

The loaded ORTModel model.

Instantiate a pretrained model from a pre-trained model configuration.
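
A minimal usage sketch (the checkpoints and the export/provider arguments below are illustrative assumptions, not prescriptions):

>>> from optimum.onnxruntime import ORTModelForSequenceClassification

>>> # Load a model repository that already contains an ONNX export
>>> model = ORTModelForSequenceClassification.from_pretrained("optimum/distilbert-base-uncased-finetuned-sst-2-english")

>>> # Or export a vanilla Transformers checkpoint to ONNX on the fly and choose an execution provider
>>> model = ORTModelForSequenceClassification.from_pretrained(
...     "distilbert-base-uncased-finetuned-sst-2-english",
...     export=True,
...     provider="CPUExecutionProvider",
... )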

load_model

( path: typing.Union[str, pathlib.Path] provider: str = 'CPUExecutionProvider' session_options: typing.Optional[onnxruntime.capi.onnxruntime_pybind11_state.SessionOptions] = None provider_options: typing.Union[typing.Dict[str, typing.Any], NoneType] = None )

Parameters

  • path (Union[str, Path]) — Path of the ONNX model.
  • provider (str, defaults to "CPUExecutionProvider") — ONNX Runtime provider to use for loading the model. See https://onnxruntime.ai/docs/execution-providers/ for possible providers.
  • session_options (Optional[onnxruntime.SessionOptions], defaults to None) — ONNX Runtime session options to use for loading the model.
  • provider_options (Optional[Dict[str, Any]], defaults to None) — Provider option dictionary corresponding to the provider used. See available options for each provider: https://onnxruntime.ai/docs/api/c/group___global.html .

Loads an ONNX Inference session with a given provider. Default provider is CPUExecutionProvider to match the default behaviour in PyTorch/TensorFlow/JAX.
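
For illustration, a minimal sketch (the local path is a placeholder and assumes an already exported model.onnx file):

>>> from optimum.onnxruntime import ORTModel

>>> # Returns an onnxruntime.InferenceSession running on the default CPU provider
>>> session = ORTModel.load_model("./my_model_directory/model.onnx")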

raise_on_numpy_input_io_binding

( use_torch: bool )

Parameters

  • use_torch (bool) — Whether the tensors used during inference are of type torch.Tensor or not.

Raises an error if IO Binding is requested although the tensors used are NumPy arrays.

shared_attributes_init

( model: InferenceSession use_io_binding: typing.Optional[bool] = None model_save_dir: typing.Union[str, pathlib.Path, tempfile.TemporaryDirectory, NoneType] = None preprocessors: typing.Optional[typing.List] = None **kwargs )

Initializes attributes that may be shared among several ONNX Runtime inference sessions.

to

( device: typing.Union[torch.device, str, int] ) ORTModel

Parameters

  • device (torch.device or str or int) — Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU, while a positive integer will run the model on the associated CUDA device id. You can pass a native torch.device or a str too.

Returns

ORTModel

the model placed on the requested device.

Changes the ONNX Runtime provider according to the device.
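
For example (a minimal sketch reusing a checkpoint shown later on this page; moving to a CUDA device assumes onnxruntime-gpu and the CUDAExecutionProvider are available):

>>> from optimum.onnxruntime import ORTModelForSequenceClassification

>>> model = ORTModelForSequenceClassification.from_pretrained("optimum/distilbert-base-uncased-finetuned-sst-2-english")
>>> model = model.to("cuda:0")  # switch the session to the CUDAExecutionProvider
>>> model = model.to("cpu")     # switch back to the CPUExecutionProvider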

Natural Language Processing

The following ORT classes are available for the following natural language processing tasks.

ORTModelForCausalLM

class optimum.onnxruntime.ORTModelForCausalLM

( decoder_session: InferenceSession config: PretrainedConfig onnx_paths: typing.List[str] decoder_with_past_session: typing.Optional[onnxruntime.capi.onnxruntime_inference_collection.InferenceSession] = None use_cache: bool = True use_io_binding: typing.Optional[bool] = None model_save_dir: typing.Union[str, pathlib.Path, tempfile.TemporaryDirectory, NoneType] = None preprocessors: typing.Optional[typing.List] = None generation_config: typing.Optional[transformers.generation.configuration_utils.GenerationConfig] = None **kwargs )

ONNX model with a causal language modeling head for ONNX Runtime inference.

can_generate

( )

Returns True to validate the check that the model can indeed generate with GenerationMixin.generate().

forward

( input_ids: LongTensor = None attention_mask: typing.Optional[torch.FloatTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None labels: typing.Optional[torch.LongTensor] = None **kwargs )

Parameters

  • input_ids (torch.LongTensor) — Indices of decoder input sequence tokens in the vocabulary of shape (batch_size, sequence_length).
  • attention_mask (torch.LongTensor) — Mask to avoid performing attention on padding token indices, of shape (batch_size, sequence_length). Mask values selected in [0, 1].
  • past_key_values (tuple(tuple(torch.FloatTensor)), optional, defaults to None) — Contains the precomputed key and value hidden states of the attention blocks used to speed up decoding. The tuple is of length config.n_layers with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head).

The ORTModelForCausalLM forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example of text generation:

>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForCausalLM
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/gpt2")
>>> model = ORTModelForCausalLM.from_pretrained("optimum/gpt2")

>>> inputs = tokenizer("My name is Arthur and I live in", return_tensors="pt")

>>> gen_tokens = model.generate(**inputs, do_sample=True, temperature=0.9, min_length=20, max_length=20)
>>> tokenizer.batch_decode(gen_tokens)

Example using transformers.pipelines:

>>> from transformers import AutoTokenizer, pipeline
>>> from optimum.onnxruntime import ORTModelForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/gpt2")
>>> model = ORTModelForCausalLM.from_pretrained("optimum/gpt2")
>>> onnx_gen = pipeline("text-generation", model=model, tokenizer=tokenizer)

>>> text = "My name is Arthur and I live in"
>>> gen = onnx_gen(text)

ORTModelForMaskedLM

class optimum.onnxruntime.ORTModelForMaskedLM

( model: InferenceSession config: PretrainedConfig use_io_binding: typing.Optional[bool] = None model_save_dir: typing.Union[str, pathlib.Path, tempfile.TemporaryDirectory, NoneType] = None preprocessors: typing.Optional[typing.List] = None **kwargs )

Parameters

  • config (transformers.PretrainedConfig) — PretrainedConfig is the Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
  • model (onnxruntime.InferenceSession) — onnxruntime.InferenceSession is the main class used to run a model. Check out the load_model() method for more information.
  • use_io_binding (Optional[bool], defaults to None) — Whether to use IOBinding during inference to avoid memory copy between the host and devices. Defaults to True if the device is CUDA, otherwise defaults to False.

Onnx Model with a MaskedLMOutput for masked language modeling tasks.

This model inherits from ORTModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

Masked language model for ONNX.

forward

( input_ids: typing.Union[torch.Tensor, numpy.ndarray, NoneType] = None attention_mask: typing.Union[torch.Tensor, numpy.ndarray, NoneType] = None token_type_ids: typing.Union[torch.Tensor, numpy.ndarray, NoneType] = None **kwargs )

Parameters

  • input_ids (Union[torch.Tensor, np.ndarray, None] of shape (batch_size, sequence_length), defaults to None) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode and PreTrainedTokenizer.__call__ for details. What are input IDs?
  • attention_mask (Union[torch.Tensor, np.ndarray, None] of shape (batch_size, sequence_length), defaults to None) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
  • token_type_ids (Union[torch.Tensor, np.ndarray, None] of shape (batch_size, sequence_length), defaults to None) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

The ORTModelForMaskedLM forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example of feature extraction:

>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForMaskedLM
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/bert-base-uncased-for-masked-lm")
>>> model = ORTModelForMaskedLM.from_pretrained("optimum/bert-base-uncased-for-masked-lm")

>>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="np")

>>> outputs = model(**inputs)
>>> logits = outputs.logits
>>> list(logits.shape)
[1, 8, 28996]

Example using transformers.pipeline:

>>> from transformers import AutoTokenizer, pipeline
>>> from optimum.onnxruntime import ORTModelForMaskedLM

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/bert-base-uncased-for-masked-lm")
>>> model = ORTModelForMaskedLM.from_pretrained("optimum/bert-base-uncased-for-masked-lm")
>>> fill_masker = pipeline("fill-mask", model=model, tokenizer=tokenizer)

>>> text = "The capital of France is [MASK]."
>>> pred = fill_masker(text)

ORTModelForSeq2SeqLM

class optimum.onnxruntime.ORTModelForSeq2SeqLM

( encoder_session: InferenceSession decoder_session: InferenceSession config: PretrainedConfig decoder_with_past_session: typing.Optional[onnxruntime.capi.onnxruntime_inference_collection.InferenceSession] = None use_io_binding: typing.Optional[bool] = None model_save_dir: typing.Union[str, pathlib.Path, tempfile.TemporaryDirectory, NoneType] = None preprocessors: typing.Optional[typing.List] = None generation_config: typing.Optional[transformers.generation.configuration_utils.GenerationConfig] = None **kwargs )

Sequence-to-sequence model with a language modeling head for ONNX Runtime inference.

can_generate

( )

Returns True to validate the check that the model can indeed generate with GenerationMixin.generate().

forward

( input_ids: LongTensor = None attention_mask: typing.Optional[torch.FloatTensor] = None decoder_input_ids: typing.Optional[torch.LongTensor] = None encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None labels: typing.Optional[torch.LongTensor] = None **kwargs )

Parameters

  • input_ids (torch.LongTensor) — Indices of input sequence tokens in the vocabulary of shape (batch_size, encoder_sequence_length).
  • attention_mask (torch.LongTensor) — Mask to avoid performing attention on padding token indices, of shape (batch_size, encoder_sequence_length). Mask values selected in [0, 1].
  • decoder_input_ids (torch.LongTensor) — Indices of decoder input sequence tokens in the vocabulary of shape (batch_size, decoder_sequence_length).
  • encoder_outputs (torch.FloatTensor) — The encoder last_hidden_state of shape (batch_size, encoder_sequence_length, hidden_size).
  • past_key_values (tuple(tuple(torch.FloatTensor)), optional, defaults to None) — Contains the precomputed key and value hidden states of the attention blocks used to speed up decoding. The tuple is of length config.n_layers with each tuple having 2 tensors of shape (batch_size, num_heads, decoder_sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).

The ORTModelForSeq2SeqLM forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example of text generation:

>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForSeq2SeqLM

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/t5-small")
>>> model = ORTModelForSeq2SeqLM.from_pretrained("optimum/t5-small")

>>> inputs = tokenizer("My name is Eustache and I like to", return_tensors="pt")

>>> gen_tokens = model.generate(**inputs)
>>> outputs = tokenizer.batch_decode(gen_tokens)

Example using transformers.pipeline:

>>> from transformers import AutoTokenizer, pipeline
>>> from optimum.onnxruntime import ORTModelForSeq2SeqLM

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/t5-small")
>>> model = ORTModelForSeq2SeqLM.from_pretrained("optimum/t5-small")
>>> onnx_translation = pipeline("translation_en_to_de", model=model, tokenizer=tokenizer)

>>> text = "My name is Eustache."
>>> pred = onnx_translation(text)

ORTModelForSequenceClassification

class optimum.onnxruntime.ORTModelForSequenceClassification

( model: InferenceSession config: PretrainedConfig use_io_binding: typing.Optional[bool] = None model_save_dir: typing.Union[str, pathlib.Path, tempfile.TemporaryDirectory, NoneType] = None preprocessors: typing.Optional[typing.List] = None **kwargs )

Parameters

  • config (transformers.PretrainedConfig) — PretrainedConfig is the Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
  • model (onnxruntime.InferenceSession) — onnxruntime.InferenceSession is the main class used to run a model. Check out the load_model() method for more information.
  • use_io_binding (Optional[bool], defaults to None) — Whether to use IOBinding during inference to avoid memory copy between the host and devices. Defaults to True if the device is CUDA, otherwise defaults to False.

Onnx Model with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks.

This model inherits from ORTModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

Sequence Classification model for ONNX.

forward

( input_ids: typing.Union[torch.Tensor, numpy.ndarray, NoneType] = None attention_mask: typing.Union[torch.Tensor, numpy.ndarray, NoneType] = None token_type_ids: typing.Union[torch.Tensor, numpy.ndarray, NoneType] = None **kwargs )

Parameters

  • input_ids (Union[torch.Tensor, np.ndarray, None] of shape (batch_size, sequence_length), defaults to None) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode and PreTrainedTokenizer.__call__ for details. What are input IDs?
  • attention_mask (Union[torch.Tensor, np.ndarray, None] of shape (batch_size, sequence_length), defaults to None) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
  • token_type_ids (Union[torch.Tensor, np.ndarray, None] of shape (batch_size, sequence_length), defaults to None) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

The ORTModelForSequenceClassification forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example of single-label classification:

>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForSequenceClassification
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/distilbert-base-uncased-finetuned-sst-2-english")
>>> model = ORTModelForSequenceClassification.from_pretrained("optimum/distilbert-base-uncased-finetuned-sst-2-english")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="np")

>>> outputs = model(**inputs)
>>> logits = outputs.logits
>>> list(logits.shape)
[1, 2]

Example using transformers.pipelines:

>>> from transformers import AutoTokenizer, pipeline
>>> from optimum.onnxruntime import ORTModelForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/distilbert-base-uncased-finetuned-sst-2-english")
>>> model = ORTModelForSequenceClassification.from_pretrained("optimum/distilbert-base-uncased-finetuned-sst-2-english")
>>> onnx_classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)

>>> text = "Hello, my dog is cute"
>>> pred = onnx_classifier(text)

Example using zero-shot-classification transformers.pipelines:

>>> from transformers import AutoTokenizer, pipeline
>>> from optimum.onnxruntime import ORTModelForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/distilbert-base-uncased-mnli")
>>> model = ORTModelForSequenceClassification.from_pretrained("optimum/distilbert-base-uncased-mnli")
>>> onnx_z0 = pipeline("zero-shot-classification", model=model, tokenizer=tokenizer)

>>> sequence_to_classify = "Who are you voting for in 2020?"
>>> candidate_labels = ["Europe", "public health", "politics", "elections"]
>>> pred = onnx_z0(sequence_to_classify, candidate_labels, multi_label=True)

ORTModelForTokenClassification

class optimum.onnxruntime.ORTModelForTokenClassification

( model: InferenceSession config: PretrainedConfig use_io_binding: typing.Optional[bool] = None model_save_dir: typing.Union[str, pathlib.Path, tempfile.TemporaryDirectory, NoneType] = None preprocessors: typing.Optional[typing.List] = None **kwargs )

Parameters

  • config (transformers.PretrainedConfig) — PretrainedConfig is the Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
  • model (onnxruntime.InferenceSession) — onnxruntime.InferenceSession is the main class used to run a model. Check out the load_model() method for more information.
  • use_io_binding (Optional[bool], defaults to None) — Whether to use IOBinding during inference to avoid memory copy between the host and devices. Defaults to True if the device is CUDA, otherwise defaults to False.

Onnx Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks.

This model inherits from ORTModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

Token Classification model for ONNX.

forward

( input_ids: typing.Union[torch.Tensor, numpy.ndarray, NoneType] = None attention_mask: typing.Union[torch.Tensor, numpy.ndarray, NoneType] = None token_type_ids: typing.Union[torch.Tensor, numpy.ndarray, NoneType] = None **kwargs )

Parameters

  • input_ids (Union[torch.Tensor, np.ndarray, None] of shape (batch_size, sequence_length), defaults to None) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode and PreTrainedTokenizer.__call__ for details. What are input IDs?
  • attention_mask (Union[torch.Tensor, np.ndarray, None] of shape (batch_size, sequence_length), defaults to None) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
  • token_type_ids (Union[torch.Tensor, np.ndarray, None] of shape (batch_size, sequence_length), defaults to None) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

The ORTModelForTokenClassification forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example of token classification:

>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForTokenClassification
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/bert-base-NER")
>>> model = ORTModelForTokenClassification.from_pretrained("optimum/bert-base-NER")

>>> inputs = tokenizer("My name is Philipp and I live in Germany.", return_tensors="np")

>>> outputs = model(**inputs)
>>> logits = outputs.logits
>>> list(logits.shape)
[1, 12, 9]

Example using transformers.pipelines:

>>> from transformers import AutoTokenizer, pipeline
>>> from optimum.onnxruntime import ORTModelForTokenClassification

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/bert-base-NER")
>>> model = ORTModelForTokenClassification.from_pretrained("optimum/bert-base-NER")
>>> onnx_ner = pipeline("token-classification", model=model, tokenizer=tokenizer)

>>> text = "My name is Philipp and I live in Germany."
>>> pred = onnx_ner(text)

ORTModelForMultipleChoice

class optimum.onnxruntime.ORTModelForMultipleChoice

( model: InferenceSession config: PretrainedConfig use_io_binding: typing.Optional[bool] = None model_save_dir: typing.Union[str, pathlib.Path, tempfile.TemporaryDirectory, NoneType] = None preprocessors: typing.Optional[typing.List] = None **kwargs )

Parameters

  • config (transformers.PretrainedConfig) — PretrainedConfig is the Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
  • model (onnxruntime.InferenceSession) — onnxruntime.InferenceSession is the main class used to run a model. Check out the load_model() method for more information.
  • use_io_binding (Optional[bool], defaults to None) — Whether to use IOBinding during inference to avoid memory copy between the host and devices. Defaults to True if the device is CUDA, otherwise defaults to False.

Onnx Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks.

This model inherits from ORTModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

Multiple choice model for ONNX.

forward

( input_ids: typing.Union[torch.Tensor, numpy.ndarray, NoneType] = None attention_mask: typing.Union[torch.Tensor, numpy.ndarray, NoneType] = None token_type_ids: typing.Union[torch.Tensor, numpy.ndarray, NoneType] = None **kwargs )

Parameters

  • input_ids (Union[torch.Tensor, np.ndarray, None] of shape (batch_size, sequence_length), defaults to None) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode and PreTrainedTokenizer.__call__ for details. What are input IDs?
  • attention_mask (Union[torch.Tensor, np.ndarray, None] of shape (batch_size, sequence_length), defaults to None) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
  • token_type_ids (Union[torch.Tensor, np.ndarray, None] of shape (batch_size, sequence_length), defaults to None) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

The ORTModelForMultipleChoice forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example of multiple choice:

>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForMultipleChoice

>>> tokenizer = AutoTokenizer.from_pretrained("ehdwns1516/bert-base-uncased_SWAG")
>>> model = ORTModelForMultipleChoice.from_pretrained("ehdwns1516/bert-base-uncased_SWAG", from_transformers=True)

>>> num_choices = 4
>>> first_sentence = ["Members of the procession walk down the street holding small horn brass instruments."] * num_choices
>>> second_sentence = [
...     "A drum line passes by walking down the street playing their instruments.",
...     "A drum line has heard approaching them.",
...     "A drum line arrives and they're outside dancing and asleep.",
...     "A drum line turns the lead singer watches the performance."
... ]
>>> inputs = tokenizer(first_sentence, second_sentence, truncation=True, padding=True)

# Unflatten the input values, expanding them to the shape [batch_size, num_choices, seq_length]
>>> for k, v in inputs.items():
...     inputs[k] = [v[i: i + num_choices] for i in range(0, len(v), num_choices)]
>>> inputs = dict(inputs.convert_to_tensors(tensor_type="pt"))
>>> outputs = model(**inputs)
>>> logits = outputs.logits

ORTModelForQuestionAnswering

class optimum.onnxruntime.ORTModelForQuestionAnswering

( model: InferenceSession config: PretrainedConfig use_io_binding: typing.Optional[bool] = None model_save_dir: typing.Union[str, pathlib.Path, tempfile.TemporaryDirectory, NoneType] = None preprocessors: typing.Optional[typing.List] = None **kwargs )

Parameters

  • config (transformers.PretrainedConfig) — PretrainedConfig is the Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
  • model (onnxruntime.InferenceSession) — onnxruntime.InferenceSession is the main class used to run a model. Check out the load_model() method for more information.
  • use_io_binding (Optional[bool], defaults to None) — Whether to use IOBinding during inference to avoid memory copy between the host and devices. Defaults to True if the device is CUDA, otherwise defaults to False.

Onnx Model with a QuestionAnsweringModelOutput for extractive question-answering tasks like SQuAD.

This model inherits from ORTModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

Question Answering model for ONNX.

forward

( input_ids: typing.Union[torch.Tensor, numpy.ndarray, NoneType] = None attention_mask: typing.Union[torch.Tensor, numpy.ndarray, NoneType] = None token_type_ids: typing.Union[torch.Tensor, numpy.ndarray, NoneType] = None **kwargs )

Parameters

  • input_ids (Union[torch.Tensor, np.ndarray, None] of shape (batch_size, sequence_length), defaults to None) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode and PreTrainedTokenizer.__call__ for details. What are input IDs?
  • attention_mask (Union[torch.Tensor, np.ndarray, None] of shape (batch_size, sequence_length), defaults to None) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]:
  • token_type_ids (Union[torch.Tensor, np.ndarray, None] of shape (batch_size, sequence_length), defaults to None) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]:

The ORTModelForQuestionAnswering forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example of question answering:

>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForQuestionAnswering
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/roberta-base-squad2")
>>> model = ORTModelForQuestionAnswering.from_pretrained("optimum/roberta-base-squad2")

>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
>>> inputs = tokenizer(question, text, return_tensors="np")
>>> start_positions = torch.tensor([1])
>>> end_positions = torch.tensor([3])

>>> outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
>>> start_scores = outputs.start_logits
>>> end_scores = outputs.end_logits

Example using transformers.pipeline:

>>> from transformers import AutoTokenizer, pipeline
>>> from optimum.onnxruntime import ORTModelForQuestionAnswering

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/roberta-base-squad2")
>>> model = ORTModelForQuestionAnswering.from_pretrained("optimum/roberta-base-squad2")
>>> onnx_qa = pipeline("question-answering", model=model, tokenizer=tokenizer)

>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
>>> pred = onnx_qa(question, text)

Computer vision

The following ORT classes are available for the following computer vision tasks.

ORTModelForImageClassification

class optimum.onnxruntime.ORTModelForImageClassification

( model: InferenceSession config: PretrainedConfig use_io_binding: typing.Optional[bool] = None model_save_dir: typing.Union[str, pathlib.Path, tempfile.TemporaryDirectory, NoneType] = None preprocessors: typing.Optional[typing.List] = None **kwargs )

Parameters

  • config (transformers.PretrainedConfig) — PretrainedConfig is the Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
  • model (onnxruntime.InferenceSession) — onnxruntime.InferenceSession is the main class used to run a model. Check out the load_model() method for more information.
  • use_io_binding (Optional[bool], defaults to None) — Whether to use IOBinding during inference to avoid memory copy between the host and devices. Defaults to True if the device is CUDA, otherwise defaults to False.

Onnx Model for image-classification tasks.

This model inherits from ORTModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

Image Classification model for ONNX.

forward

( pixel_values: typing.Union[torch.Tensor, numpy.ndarray] **kwargs )

Parameters

  • pixel_values (Union[torch.Tensor, np.ndarray, None] of shape (batch_size, num_channels, height, width), defaults to None) — Pixel values corresponding to the images in the current batch. Pixel values can be obtained from encoded images using AutoFeatureExtractor.

The ORTModelForImageClassification forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example of image classification:

>>> import requests
>>> from PIL import Image
>>> from optimum.onnxruntime import ORTModelForImageClassification
>>> from transformers import AutoFeatureExtractor

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> preprocessor = AutoFeatureExtractor.from_pretrained("optimum/vit-base-patch16-224")
>>> model = ORTModelForImageClassification.from_pretrained("optimum/vit-base-patch16-224")

>>> inputs = preprocessor(images=image, return_tensors="np")

>>> outputs = model(**inputs)
>>> logits = outputs.logits

Example using transformers.pipeline:

>>> import requests
>>> from PIL import Image
>>> from transformers import AutoFeatureExtractor, pipeline
>>> from optimum.onnxruntime import ORTModelForImageClassification

>>> preprocessor = AutoFeatureExtractor.from_pretrained("optimum/vit-base-patch16-224")
>>> model = ORTModelForImageClassification.from_pretrained("optimum/vit-base-patch16-224")
>>> onnx_image_classifier = pipeline("image-classification", model=model, feature_extractor=preprocessor)

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> pred = onnx_image_classifier(url)

ORTModelForSemanticSegmentation

class optimum.onnxruntime.ORTModelForSemanticSegmentation

( model: InferenceSession config: PretrainedConfig use_io_binding: typing.Optional[bool] = None model_save_dir: typing.Union[str, pathlib.Path, tempfile.TemporaryDirectory, NoneType] = None preprocessors: typing.Optional[typing.List] = None **kwargs )

Parameters

  • config (transformers.PretrainedConfig) — PretrainedConfig is the Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
  • model (onnxruntime.InferenceSession) — onnxruntime.InferenceSession is the main class used to run a model. Check out the load_model() method for more information.
  • use_io_binding (Optional[bool], defaults to None) — Whether to use IOBinding during inference to avoid memory copy between the host and devices. Defaults to True if the device is CUDA, otherwise defaults to False.

Onnx Model with an all-MLP decode head on top e.g. for ADE20k, CityScapes.

This model inherits from ORTModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

Semantic Segmentation model for ONNX.

forward

( **kwargs )

Parameters

  • pixel_values (Union[torch.Tensor, np.ndarray, None] of shape (batch_size, num_channels, height, width), defaults to None) — Pixel values corresponding to the images in the current batch. Pixel values can be obtained from encoded images using AutoFeatureExtractor.

The ORTModelForSemanticSegmentation forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example of semantic segmentation:

>>> import requests
>>> from PIL import Image
>>> from optimum.onnxruntime import ORTModelForSemanticSegmentation
>>> from transformers import AutoFeatureExtractor

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> preprocessor = AutoFeatureExtractor.from_pretrained("optimum/segformer-b0-finetuned-ade-512-512")
>>> model = ORTModelForSemanticSegmentation.from_pretrained("optimum/segformer-b0-finetuned-ade-512-512")

>>> inputs = preprocessor(images=image, return_tensors="np")

>>> outputs = model(**inputs)
>>> logits = outputs.logits

Example using transformers.pipeline:

>>> import requests
>>> from PIL import Image
>>> from transformers import AutoFeatureExtractor, pipeline
>>> from optimum.onnxruntime import ORTModelForSemanticSegmentation

>>> preprocessor = AutoFeatureExtractor.from_pretrained("optimum/segformer-b0-finetuned-ade-512-512")
>>> model = ORTModelForSemanticSegmentation.from_pretrained("optimum/segformer-b0-finetuned-ade-512-512")
>>> onnx_image_segmenter = pipeline("image-segmentation", model=model, feature_extractor=preprocessor)

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> pred = onnx_image_segmenter(url)

Audio

The following ORT classes are available for the following audio tasks.

ORTModelForAudioClassification

class optimum.onnxruntime.ORTModelForAudioClassification

( model: InferenceSession config: PretrainedConfig use_io_binding: typing.Optional[bool] = None model_save_dir: typing.Union[str, pathlib.Path, tempfile.TemporaryDirectory, NoneType] = None preprocessors: typing.Optional[typing.List] = None **kwargs )

Parameters

  • config (transformers.PretrainedConfig) — PretrainedConfig is the Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
  • model (onnxruntime.InferenceSession) — onnxruntime.InferenceSession is the main class used to run a model. Check out the load_model() method for more information.
  • use_io_binding (Optional[bool], defaults to None) — Whether to use IOBinding during inference to avoid memory copy between the host and devices. Defaults to True if the device is CUDA, otherwise defaults to False.

Onnx Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like SUPERB Keyword Spotting.

This model inherits from ORTModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

Audio Classification model for ONNX.

forward

( input_values: typing.Optional[torch.Tensor] = None attenton_mask: typing.Optional[torch.Tensor] = None **kwargs )

Parameters

  • input_values (torch.Tensor of shape (batch_size, sequence_length)) — Float values of the input raw speech waveform. Input values can be obtained from an audio file loaded into an array using AutoFeatureExtractor.

The ORTModelForAudioClassification forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example of audio classification:

>>> from transformers import AutoFeatureExtractor
>>> from optimum.onnxruntime import ORTModelForAudioClassification
>>> from datasets import load_dataset
>>> import torch

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("optimum/hubert-base-superb-ks")
>>> model = ORTModelForAudioClassification.from_pretrained("optimum/hubert-base-superb-ks")

>>> # audio file is decoded on the fly
>>> inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_class_ids = torch.argmax(logits, dim=-1).item()
>>> predicted_label = model.config.id2label[predicted_class_ids]

Example using transformers.pipeline:

>>> from transformers import AutoFeatureExtractor, pipeline
>>> from optimum.onnxruntime import ORTModelForAudioClassification
>>> from datasets import load_dataset

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("optimum/hubert-base-superb-ks")
>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")

>>> model = ORTModelForAudioClassification.from_pretrained("optimum/hubert-base-superb-ks")
>>> onnx_ac = pipeline("audio-classification", model=model, feature_extractor=feature_extractor)

>>> pred = onnx_ac(dataset[0]["audio"]["array"])

ORTModelForAudioFrameClassification

class optimum.onnxruntime.ORTModelForAudioFrameClassification

( model: InferenceSession config: PretrainedConfig use_io_binding: typing.Optional[bool] = None model_save_dir: typing.Union[str, pathlib.Path, tempfile.TemporaryDirectory, NoneType] = None preprocessors: typing.Optional[typing.List] = None **kwargs )

Parameters

  • config (transformers.PretrainedConfig) — PretrainedConfig is the Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
  • model (onnxruntime.InferenceSession) — onnxruntime.InferenceSession is the main class used to run a model. Check out the load_model() method for more information.
  • use_io_binding (Optional[bool], defaults to None) — Whether to use IOBinding during inference to avoid memory copy between the host and devices. Defaults to True if the device is CUDA, otherwise defaults to False.

Onnx Model with a frame classification head on top for tasks like Speaker Diarization.

This model inherits from ORTModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

Audio Frame Classification model for ONNX.

forward

( input_values: typing.Optional[torch.Tensor] = None **kwargs )

Parameters

  • input_values (torch.Tensor of shape (batch_size, sequence_length)) — Float values of the input raw speech waveform. Input values can be obtained from an audio file loaded into an array using AutoFeatureExtractor.

The ORTModelForAudioFrameClassification forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example of audio frame classification:

>>> from transformers import AutoFeatureExtractor
>>> from optimum.onnxruntime import ORTModelForAudioFrameClassification
>>> from datasets import load_dataset
>>> import torch

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("optimum/wav2vec2-base-superb-sd")
>>> model = ORTModelForAudioFrameClassification.from_pretrained("optimum/wav2vec2-base-superb-sd")

>>> inputs = feature_extractor(dataset[0]["audio"]["array"], return_tensors="pt", sampling_rate=sampling_rate)
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> probabilities = torch.sigmoid(logits[0])
>>> labels = (probabilities > 0.5).long()
>>> labels[0].tolist()

ORTModelForCTC

class optimum.onnxruntime.ORTModelForCTC

( model: InferenceSession config: PretrainedConfig use_io_binding: typing.Optional[bool] = None model_save_dir: typing.Union[str, pathlib.Path, tempfile.TemporaryDirectory, NoneType] = None preprocessors: typing.Optional[typing.List] = None **kwargs )

Parameters

  • config (transformers.PretrainedConfig) — PretrainedConfig is the Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
  • model (onnxruntime.InferenceSession) — onnxruntime.InferenceSession is the main class used to run a model. Check out the load_model() method for more information.
  • use_io_binding (Optional[bool], defaults to None) — Whether to use IOBinding during inference to avoid memory copy between the host and devices. Defaults to True if the device is CUDA, otherwise defaults to False.

Onnx Model with a language modeling head on top for Connectionist Temporal Classification (CTC).

This model inherits from ORTModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

CTC model for ONNX.

forward

( input_values: typing.Optional[torch.Tensor] = None **kwargs )

Parameters

  • input_values (torch.Tensor of shape (batch_size, sequence_length)) — Float values of the input raw speech waveform. Input values can be obtained from an audio file loaded into an array using AutoFeatureExtractor.

The ORTModelForCTC forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example of CTC:

>>> from transformers import AutoProcessor
>>> from optimum.onnxruntime import ORTModelForCTC
>>> from datasets import load_dataset
>>> import torch

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> processor = AutoProcessor.from_pretrained("optimum/hubert-large-ls960-ft")
>>> model = ORTModelForCTC.from_pretrained("optimum/hubert-large-ls960-ft")

>>> # audio file is decoded on the fly
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_ids = torch.argmax(logits, dim=-1)

>>> transcription = processor.batch_decode(predicted_ids)

ORTModelForSpeechSeq2Seq

class optimum.onnxruntime.ORTModelForSpeechSeq2Seq

( encoder_session: InferenceSession decoder_session: InferenceSession config: PretrainedConfig decoder_with_past_session: typing.Optional[onnxruntime.capi.onnxruntime_inference_collection.InferenceSession] = None use_io_binding: typing.Optional[bool] = None model_save_dir: typing.Union[str, pathlib.Path, tempfile.TemporaryDirectory, NoneType] = None preprocessors: typing.Optional[typing.List] = None generation_config: typing.Optional[transformers.generation.configuration_utils.GenerationConfig] = None **kwargs )

Speech Sequence-to-sequence model with a language modeling head for ONNX Runtime inference.

can_generate

( )

Returns True to validate the check that the model can indeed generate with GenerationMixin.generate().

forward

( input_features: typing.Optional[torch.FloatTensor] = None attention_mask: typing.Optional[torch.LongTensor] = None decoder_input_ids: typing.Optional[torch.LongTensor] = None encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None labels: typing.Optional[torch.LongTensor] = None **kwargs )

Parameters

  • input_features (torch.FloatTensor) — Mel features extracted from the raw speech waveform, of shape (batch_size, feature_size, encoder_sequence_length).
  • decoder_input_ids (torch.LongTensor) — Indices of decoder input sequence tokens in the vocabulary of shape (batch_size, decoder_sequence_length).
  • encoder_outputs (torch.FloatTensor) — The encoder last_hidden_state of shape (batch_size, encoder_sequence_length, hidden_size).
  • past_key_values (tuple(tuple(torch.FloatTensor)), optional, defaults to None) — Contains the precomputed key and value hidden states of the attention blocks used to speed up decoding. The tuple is of length config.n_layers with each tuple having 2 tensors of shape (batch_size, num_heads, decoder_sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).

The ORTModelForSpeechSeq2Seq forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example of text generation:

>>> from transformers import AutoProcessor
>>> from optimum.onnxruntime import ORTModelForSpeechSeq2Seq
>>> from datasets import load_dataset

>>> processor = AutoProcessor.from_pretrained("optimum/whisper-tiny.en")
>>> model = ORTModelForSpeechSeq2Seq.from_pretrained("optimum/whisper-tiny.en")

>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> inputs = processor.feature_extractor(ds[0]["audio"]["array"], return_tensors="pt")

>>> gen_tokens = model.generate(inputs=inputs.input_features)
>>> outputs = processor.tokenizer.batch_decode(gen_tokens)

Example using transformers.pipeline:

>>> from transformers import AutoProcessor, pipeline
>>> from optimum.onnxruntime import ORTModelForSpeechSeq2Seq
>>> from datasets import load_dataset

>>> processor = AutoProcessor.from_pretrained("optimum/whisper-tiny.en")
>>> model = ORTModelForSpeechSeq2Seq.from_pretrained("optimum/whisper-tiny.en")
>>> speech_recognition = pipeline("automatic-speech-recognition", model=model, tokenizer=processor.tokenizer, feature_extractor=processor.feature_extractor)

>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> pred = speech_recognition(ds[0]["audio"]["array"])

ORTModelForAudioXVector

class optimum.onnxruntime.ORTModelForAudioXVector

( model: InferenceSession config: PretrainedConfig use_io_binding: typing.Optional[bool] = None model_save_dir: typing.Union[str, pathlib.Path, tempfile.TemporaryDirectory, NoneType] = None preprocessors: typing.Optional[typing.List] = None **kwargs )

Parameters

  • config (transformers.PretrainedConfig) — PretrainedConfig is the Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
  • model (onnxruntime.InferenceSession) — onnxruntime.InferenceSession is the main class used to run a model. Check out the load_model() method for more information.
  • use_io_binding (Optional[bool], defaults to None) — Whether to use IOBinding during inference to avoid memory copy between the host and devices. Defaults to True if the device is CUDA, otherwise defaults to False.

Onnx Model with an XVector feature extraction head on top for tasks like Speaker Verification.

This model inherits from ORTModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

Audio XVector model for ONNX.

forward

( input_values: typing.Optional[torch.Tensor] = None **kwargs )

Parameters

  • input_values (torch.Tensor of shape (batch_size, sequence_length)) — Float values of the input raw speech waveform. Input values can be obtained from an audio file loaded into an array using AutoFeatureExtractor.

The ORTModelForAudioXVector forward method, overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example of Audio XVector:

>>> from transformers import AutoFeatureExtractor
>>> from optimum.onnxruntime import ORTModelForAudioXVector
>>> from datasets import load_dataset
>>> import torch

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("optimum/wav2vec2-base-superb-sv")
>>> model = ORTModelForAudioXVector.from_pretrained("optimum/wav2vec2-base-superb-sv")

>>> # audio file is decoded on the fly
>>> inputs = feature_extractor(
...     [d["array"] for d in dataset[:2]["audio"]], sampling_rate=sampling_rate, return_tensors="pt", padding=True
... )
>>> with torch.no_grad():
...     embeddings = model(**inputs).embeddings

>>> embeddings = torch.nn.functional.normalize(embeddings, dim=-1).cpu()

>>> cosine_sim = torch.nn.CosineSimilarity(dim=-1)
>>> similarity = cosine_sim(embeddings[0], embeddings[1])
>>> threshold = 0.7
>>> if similarity < threshold:
...     print("Speakers are not the same!")
>>> round(similarity.item(), 2)

Multimodal

The following ORT classes are available for multimodal tasks.

ORTModelForVision2Seq

class optimum.onnxruntime.ORTModelForVision2Seq

< >

( encoder_session: InferenceSession decoder_session: InferenceSession config: PretrainedConfig decoder_with_past_session: typing.Optional[onnxruntime.capi.onnxruntime_inference_collection.InferenceSession] = None use_io_binding: typing.Optional[bool] = None model_save_dir: typing.Union[str, pathlib.Path, tempfile.TemporaryDirectory, NoneType] = None preprocessors: typing.Optional[typing.List] = None generation_config: typing.Optional[transformers.generation.configuration_utils.GenerationConfig] = None **kwargs )

VisionEncoderDecoder Sequence-to-sequence model with a language modeling head for ONNX Runtime inference.

can_generate

< >

( )

Returns True, validating the check made by GenerationMixin.generate() that the model can indeed generate.
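
As a minimal sketch (reusing the image-captioning checkpoint from the examples below), the check can be performed explicitly before calling generate():

>>> from optimum.onnxruntime import ORTModelForVision2Seq

>>> model = ORTModelForVision2Seq.from_pretrained("nlpconnect/vit-gpt2-image-captioning", from_transformers=True)
>>> model.can_generate()
True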

forward

< >

( pixel_values: typing.Optional[torch.FloatTensor] = None decoder_input_ids: typing.Optional[torch.LongTensor] = None encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None labels: typing.Optional[torch.LongTensor] = None **kwargs )

Parameters

  • pixel_values (torch.FloatTensor) — Features extracted from an image. This tensor should be of shape (batch_size, num_channels, height, width).
  • decoder_input_ids (torch.LongTensor) — Indices of decoder input sequence tokens in the vocabulary, of shape (batch_size, decoder_sequence_length).
  • encoder_outputs (torch.FloatTensor) — The encoder last_hidden_state of shape (batch_size, encoder_sequence_length, hidden_size).
  • past_key_values (tuple(tuple(torch.FloatTensor)), optional, defaults to None) — Contains the precomputed key and value hidden states of the attention blocks used to speed up decoding. The tuple is of length config.n_layers, with each tuple containing 2 tensors of shape (batch_size, num_heads, decoder_sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).

The ORTModelForVision2Seq forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example of text generation:

>>> from transformers import AutoImageProcessor, AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForVision2Seq
>>> from PIL import Image
>>> import requests


>>> processor = AutoImageProcessor.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
>>> tokenizer = AutoTokenizer.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
>>> model = ORTModelForVision2Seq.from_pretrained("nlpconnect/vit-gpt2-image-captioning", from_transformers=True)

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> inputs = processor(image, return_tensors="pt")

>>> gen_tokens = model.generate(**inputs)
>>> outputs = tokenizer.batch_decode(gen_tokens, skip_special_tokens=True)

Example using transformers.pipeline:

>>> from transformers import AutoImageProcessor, AutoTokenizer, pipeline
>>> from optimum.onnxruntime import ORTModelForVision2Seq
>>> from PIL import Image
>>> import requests


>>> processor = AutoImageProcessor.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
>>> tokenizer = AutoTokenizer.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
>>> model = ORTModelForVision2Seq.from_pretrained("nlpconnect/vit-gpt2-image-captioning", from_transformers=True)

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_to_text = pipeline("image-to-text", model=model, tokenizer=tokenizer, feature_extractor=processor, image_processor=processor)
>>> pred = image_to_text(image)
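
Both examples above re-export the checkpoint to ONNX on every from_pretrained call (from_transformers=True). As a hedged follow-up, the exported files can be saved with save_pretrained and reloaded without re-exporting; the local directory name below is only illustrative:

>>> model.save_pretrained("./vit-gpt2-image-captioning-onnx")
>>> model = ORTModelForVision2Seq.from_pretrained("./vit-gpt2-image-captioning-onnx")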

Custom Tasks

The following ORT classes are available for custom tasks.

ORTModelForCustomTasks

class optimum.onnxruntime.ORTModelForCustomTasks

< >

( model: InferenceSession config: PretrainedConfig use_io_binding: typing.Optional[bool] = None model_save_dir: typing.Union[str, pathlib.Path, tempfile.TemporaryDirectory, NoneType] = None preprocessors: typing.Optional[typing.List] = None **kwargs )

Parameters

  • config (transformers.PretrainedConfig) — PretrainedConfig is the Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
  • model (onnxruntime.InferenceSession) — onnxruntime.InferenceSession is the main class used to run a model. Check out the load_model() method for more information.
  • use_io_binding (Optional[bool], defaults to None) — Whether to use IOBinding during inference to avoid memory copy between the host and devices. Defaults to True if the device is CUDA, otherwise defaults to False.

ONNX Model for any custom task. It can be used to leverage inference acceleration for any single-file ONNX model.

This model inherits from ORTModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

Model for any custom task, as long as the ONNX model is stored in a single file.

forward

< >

( **kwargs )

The ORTModelForCustomTasks forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example of a custom task (e.g. a sentence-transformers model returning pooler_output as an output):

>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForCustomTasks

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/sbert-all-MiniLM-L6-with-pooler")
>>> model = ORTModelForCustomTasks.from_pretrained("optimum/sbert-all-MiniLM-L6-with-pooler")

>>> inputs = tokenizer("I love burritos!", return_tensors="np")

>>> outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state
>>> pooler_output = outputs.pooler_output

Example using transformers.pipeline (only if the task is supported):

>>> from transformers import AutoTokenizer, pipeline
>>> from optimum.onnxruntime import ORTModelForCustomTasks

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/sbert-all-MiniLM-L6-with-pooler")
>>> model = ORTModelForCustomTasks.from_pretrained("optimum/sbert-all-MiniLM-L6-with-pooler")
>>> onnx_extractor = pipeline("feature-extraction", model=model, tokenizer=tokenizer)

>>> text = "I love burritos!"
>>> pred = onnx_extractor(text)
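
As a hedged follow-up (not part of the Optimum API), the pooled outputs of this checkpoint can be compared directly for sentence similarity; the sketch below reuses the tokenizer and model defined above:

>>> import numpy as np

>>> batch = tokenizer(["I love burritos!", "I love tacos!"], padding=True, return_tensors="np")
>>> pooled = np.asarray(model(**batch).pooler_output)
>>> pooled = pooled / np.linalg.norm(pooled, axis=-1, keepdims=True)
>>> # cosine similarity of the two normalized pooled embeddings
>>> similarity = float(pooled[0] @ pooled[1])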

ORTModelForFeatureExtraction

class optimum.onnxruntime.ORTModelForFeatureExtraction

< >

( model: InferenceSession config: PretrainedConfig use_io_binding: typing.Optional[bool] = None model_save_dir: typing.Union[str, pathlib.Path, tempfile.TemporaryDirectory, NoneType] = None preprocessors: typing.Optional[typing.List] = None **kwargs )

Parameters

  • config (transformers.PretrainedConfig) — PretrainedConfig is the Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
  • model (onnxruntime.InferenceSession) — onnxruntime.InferenceSession is the main class used to run a model. Check out the load_model() method for more information.
  • use_io_binding (Optional[bool], defaults to None) — Whether to use IOBinding during inference to avoid memory copy between the host and devices. Defaults to True if the device is CUDA, otherwise defaults to False.

ONNX Model with a BaseModelOutput for feature-extraction tasks.

This model inherits from ORTModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

Feature Extraction model for ONNX.

forward

< >

( input_ids: typing.Union[torch.Tensor, numpy.ndarray, NoneType] = None attention_mask: typing.Union[torch.Tensor, numpy.ndarray, NoneType] = None token_type_ids: typing.Union[torch.Tensor, numpy.ndarray, NoneType] = None **kwargs )

Parameters

  • input_ids (Union[torch.Tensor, np.ndarray, None] of shape (batch_size, sequence_length), defaults to None) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode and PreTrainedTokenizer.__call__ for details. What are input IDs?
  • attention_mask (Union[torch.Tensor, np.ndarray, None] of shape (batch_size, sequence_length), defaults to None) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
  • token_type_ids (Union[torch.Tensor, np.ndarray, None] of shape (batch_size, sequence_length), defaults to None) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 1 for tokens that correspond to sentence A, 0 for tokens that correspond to sentence B.

The ORTModelForFeatureExtraction forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Example of feature extraction:

>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForFeatureExtraction
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/all-MiniLM-L6-v2")
>>> model = ORTModelForFeatureExtraction.from_pretrained("optimum/all-MiniLM-L6-v2")

>>> inputs = tokenizer("My name is Philipp and I live in Germany.", return_tensors="np")

>>> outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state
>>> list(last_hidden_state.shape)
[1, 12, 384]
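
As a hedged follow-up (mean pooling is the usual recipe for sentence-transformers checkpoints such as all-MiniLM-L6-v2, but it is not part of the model API), the token embeddings can be pooled into a single sentence embedding:

>>> import numpy as np

>>> token_embeddings = np.asarray(last_hidden_state)
>>> # mask out padding tokens before averaging over the sequence dimension
>>> mask = np.expand_dims(inputs["attention_mask"], -1).astype(token_embeddings.dtype)
>>> sentence_embedding = (token_embeddings * mask).sum(axis=1) / mask.sum(axis=1)
>>> list(sentence_embedding.shape)
[1, 384]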

Example using transformers.pipeline:

>>> from transformers import AutoTokenizer, pipeline
>>> from optimum.onnxruntime import ORTModelForFeatureExtraction

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/all-MiniLM-L6-v2")
>>> model = ORTModelForFeatureExtraction.from_pretrained("optimum/all-MiniLM-L6-v2")
>>> onnx_extractor = pipeline("feature-extraction", model=model, tokenizer=tokenizer)

>>> text = "My name is Philipp and I live in Germany."
>>> pred = onnx_extractor(text)
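
The feature-extraction pipeline returns the token embeddings as nested Python lists; a minimal, hedged shape check (the exact nesting can vary between transformers versions):

>>> import numpy as np

>>> np.array(pred).shape  # roughly (batch_size, sequence_length, hidden_size)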