Models

ORTModel

class optimum.onnxruntime.ORTModel

( model: InferenceSession = None config: PretrainedConfig = None use_io_binding: bool = True **kwargs )

Base ORTModel class for implementing models using ONNX Runtime. The ORTModel implements generic methods for interacting with the Hugging Face Hub as well as for exporting vanilla Transformers models to ONNX using the transformers.onnx toolchain. It additionally implements generic methods for optimizing and quantizing ONNX models.

from_pretrained

( model_id: typing.Union[str, pathlib.Path] from_transformers: bool = False force_download: bool = False use_auth_token: typing.Optional[str] = None cache_dir: typing.Optional[str] = None subfolder: typing.Optional[str] = '' provider: typing.Optional[str] = 'CPUExecutionProvider' session_options: typing.Optional[onnxruntime.capi.onnxruntime_pybind11_state.SessionOptions] = None provider_options: typing.Optional[typing.Dict] = None *args **kwargs ) ORTModel

Parameters

  • model_id (Union[str, Path]) — Can be either:

    • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
    • A path to a directory containing a model saved using ~OptimizedModel.save_pretrained, e.g., ./my_model_directory/.
  • from_transformers (bool, optional, defaults to False) — Defines whether the provided model_id contains a vanilla Transformers checkpoint.
  • force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
  • use_auth_token (str, optional, defaults to None) — The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running transformers-cli login (stored in ~/.huggingface).
  • cache_dir (str, optional, defaults to None) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (i.e., do not try to download the model).
  • subfolder (str, optional, defaults to "") — In case the relevant files are located inside a subfolder of the model repo either locally or on huggingface.co, you can specify the folder name here.
  • provider (str, optional) — ONNX Runtime provider to use for loading the model. See https://onnxruntime.ai/docs/execution-providers/ for possible providers. Defaults to CPUExecutionProvider.
  • session_options (onnxruntime.SessionOptions, optional) — ONNX Runtime session options to use for loading the model. Defaults to None.
  • provider_options (Dict, optional) — Provider option dictionary corresponding to the provider used. See available options for each provider: https://onnxruntime.ai/docs/api/c/group___global.html. Defaults to None.

Returns

ORTModel

The loaded ORTModel.

Instantiate a pretrained model from a pretrained model configuration.
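
A minimal sketch of both loading paths (the ONNX checkpoint is the one used elsewhere on this page; bert-base-uncased stands in for any vanilla Transformers checkpoint):

>>> from optimum.onnxruntime import ORTModelForFeatureExtraction

>>> # Load a repository that already contains an exported ONNX model.
>>> model = ORTModelForFeatureExtraction.from_pretrained("optimum/all-MiniLM-L6-v2")

>>> # Export a vanilla Transformers checkpoint to ONNX on the fly.
>>> model = ORTModelForFeatureExtraction.from_pretrained("bert-base-uncased", from_transformers=True)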

load_model

( path: typing.Union[str, pathlib.Path] provider: typing.Optional[str] = 'CPUExecutionProvider' session_options: typing.Optional[onnxruntime.capi.onnxruntime_pybind11_state.SessionOptions] = None provider_options: typing.Optional[typing.Dict] = None **kwargs )

Parameters

  • path (str or Path) — Path of the ONNX model.
  • provider (str, optional) — ONNX Runtime provider to use for loading the model. See https://onnxruntime.ai/docs/execution-providers/ for possible providers. Defaults to CPUExecutionProvider.
  • session_options (onnxruntime.SessionOptions, optional) — ONNX Runtime session options to use for loading the model. Defaults to None.
  • provider_options (Dict, optional) — Provider option dictionary corresponding to the provider used. See available options for each provider: https://onnxruntime.ai/docs/api/c/group___global.html. Defaults to None.

Loads an ONNX Runtime InferenceSession with a given provider. The default provider is CPUExecutionProvider, matching the default behaviour in PyTorch/TensorFlow/JAX.
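
A short sketch, assuming ./my_model_directory/model.onnx is a previously exported ONNX file (the path is hypothetical):

>>> from optimum.onnxruntime import ORTModel

>>> session = ORTModel.load_model("./my_model_directory/model.onnx", provider="CPUExecutionProvider")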

to

( device: typing.Union[torch.device, str, int] ) ORTModel

Parameters

  • device (torch.device or str or int) — Device ordinal for CPU/GPU support. Setting this to -1 will use the CPU, while a positive integer will run the model on the associated CUDA device id. You can also pass a native torch.device or a str.

Returns

ORTModel

The model placed on the requested device.

Changes the ONNX Runtime provider according to the device.
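
For example, assuming a CUDA-capable onnxruntime-gpu build is installed:

>>> model = model.to("cuda")  # switches to CUDAExecutionProvider
>>> model = model.to("cpu")  # back to CPUExecutionProvider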

ORTModelForFeatureExtraction

class optimum.onnxruntime.ORTModelForFeatureExtraction

( model = None config = None use_io_binding = True **kwargs )

Parameters

  • config (transformers.PretrainedConfig) — PretrainedConfig is the Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
  • model (onnxruntime.InferenceSession) — onnxruntime.InferenceSession is the main class used to run a model. Check out the load_model() method for more information.
  • use_io_binding (bool, optional) — Whether to use IOBinding during inference to avoid memory copy between the host and devices. Defaults to True if the device is CUDA, otherwise defaults to False.

ONNX Model for feature-extraction tasks, returning the model's raw hidden states.

This model inherits from [~onnxruntime.modeling_ort.ORTModel]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

Feature Extraction model for ONNX.

forward

( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None **kwargs )

Parameters

  • input_ids (torch.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode and PreTrainedTokenizer.__call__ for details. What are input IDs?
  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
  • token_type_ids (torch.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token.

The ORTModelForFeatureExtraction forward method overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example of feature extraction:

>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForFeatureExtraction
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/all-MiniLM-L6-v2")
>>> model = ORTModelForFeatureExtraction.from_pretrained("optimum/all-MiniLM-L6-v2")

>>> inputs = tokenizer("My name is Philipp and I live in Germany.", return_tensors="pt")

>>> outputs = model(**inputs)
>>> last_hidden_state = outputs.last_hidden_state
>>> list(last_hidden_state.shape)
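
Since all-MiniLM-L6-v2 is a sentence-embedding model, a common follow-up is mean pooling over the last hidden state, masking out padding (a sketch, not part of the API):

>>> mask = inputs["attention_mask"].unsqueeze(-1).float()
>>> sentence_embedding = (last_hidden_state * mask).sum(1) / mask.sum(1)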

Example using transformers.pipeline:

>>> from transformers import AutoTokenizer, pipeline
>>> from optimum.onnxruntime import ORTModelForFeatureExtraction

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/all-MiniLM-L6-v2")
>>> model = ORTModelForFeatureExtraction.from_pretrained("optimum/all-MiniLM-L6-v2")
>>> onnx_extractor = pipeline("feature-extraction", model=model, tokenizer=tokenizer)

>>> text = "My name is Philipp and I live in Germany."
>>> pred = onnx_extractor(text)

prepare_output_buffer

( batch_size sequence_length hidden_size output_name: str )

Prepares the output buffer for output_name: a 1D tensor that is later viewed with shape (batch_size, sequence_length, hidden_size).
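
A hypothetical sketch of the kind of allocation this performs when IOBinding is used (names and sizes are illustrative, not the library's internals):

>>> import torch
>>> batch_size, sequence_length, hidden_size = 1, 12, 384
>>> # Flat 1D buffer on the target device; ONNX Runtime writes into it, and the
>>> # result is later viewed as (batch_size, sequence_length, hidden_size).
>>> buffer = torch.empty(batch_size * sequence_length * hidden_size, dtype=torch.float32, device="cuda")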

ORTModelForQuestionAnswering

class optimum.onnxruntime.ORTModelForQuestionAnswering

( model = None config = None use_io_binding = True **kwargs )

Parameters

  • config (transformers.PretrainedConfig) — PretrainedConfig is the Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
  • model (onnxruntime.InferenceSession) — onnxruntime.InferenceSession is the main class used to run a model. Check out the load_model() method for more information.
  • use_io_binding (bool, optional) — Whether to use IOBinding during inference to avoid memory copy between the host and devices. Defaults to True if the device is CUDA, otherwise defaults to False.

ONNX Model with a QuestionAnsweringModelOutput for extractive question-answering tasks like SQuAD.

This model inherits from [~onnxruntime.modeling_ort.ORTModel]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

Question Answering model for ONNX.

forward

( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None **kwargs )

Parameters

  • input_ids (torch.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode and PreTrainedTokenizer.__call__ for details. What are input IDs?
  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
  • token_type_ids (torch.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token.

The ORTModelForQuestionAnswering forward method overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example of question answering:

>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForQuestionAnswering
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/roberta-base-squad2")
>>> model = ORTModelForQuestionAnswering.from_pretrained("optimum/roberta-base-squad2")

>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
>>> inputs = tokenizer(question, text, return_tensors="pt")

>>> outputs = model(**inputs)
>>> start_scores = outputs.start_logits
>>> end_scores = outputs.end_logits
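
To turn the scores into an answer string, the simplest strategy is argmax decoding (a sketch, not part of the API):

>>> answer_start = int(start_scores.argmax())
>>> answer_end = int(end_scores.argmax())
>>> answer = tokenizer.decode(inputs["input_ids"][0, answer_start : answer_end + 1])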

Example using transformers.pipeline:

>>> from transformers import AutoTokenizer, pipeline
>>> from optimum.onnxruntime import ORTModelForQuestionAnswering

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/roberta-base-squad2")
>>> model = ORTModelForQuestionAnswering.from_pretrained("optimum/roberta-base-squad2")
>>> onnx_qa = pipeline("question-answering", model=model, tokenizer=tokenizer)

>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
>>> pred = onnx_qa(question=question, context=text)

prepare_logits_buffer

( batch_size sequence_length output_name: str )

Prepares the logits output buffer: a 1D tensor that is later viewed with shape (batch_size, sequence_length).

ORTModelForSequenceClassification

class optimum.onnxruntime.ORTModelForSequenceClassification

( model = None config = None use_io_binding = True **kwargs )

Parameters

  • config (transformers.PretrainedConfig) — PretrainedConfig is the Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
  • model (onnxruntime.InferenceSession) — onnxruntime.InferenceSession is the main class used to run a model. Check out the load_model() method for more information.
  • use_io_binding (bool, optional) — Whether to use IOBinding during inference to avoid memory copy between the host and devices. Defaults to True if the device is CUDA, otherwise defaults to False.

ONNX Model with a sequence classification/regression head on top (a linear layer on top of the pooled output), e.g. for GLUE tasks.

This model inherits from [~onnxruntime.modeling_ort.ORTModel]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

Sequence Classification model for ONNX.

forward

( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None **kwargs )

Parameters

  • input_ids (torch.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode and PreTrainedTokenizer.__call__ for details. What are input IDs?
  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
  • token_type_ids (torch.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token.

The ORTModelForSequenceClassification forward method overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example of single-label classification:

>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForSequenceClassification
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/distilbert-base-uncased-finetuned-sst-2-english")
>>> model = ORTModelForSequenceClassification.from_pretrained("optimum/distilbert-base-uncased-finetuned-sst-2-english")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

>>> outputs = model(**inputs)
>>> logits = outputs.logits
>>> list(logits.shape)
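
To map the logits to a class name, the id2label mapping from the model config can be used (a sketch):

>>> predicted_class = model.config.id2label[logits.argmax(-1).item()]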

Example using transformers.pipeline:

>>> from transformers import AutoTokenizer, pipeline
>>> from optimum.onnxruntime import ORTModelForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/distilbert-base-uncased-finetuned-sst-2-english")
>>> model = ORTModelForSequenceClassification.from_pretrained("optimum/distilbert-base-uncased-finetuned-sst-2-english")
>>> onnx_classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)

>>> text = "Hello, my dog is cute"
>>> pred = onnx_classifier(text)

Example using transformers.pipeline for zero-shot classification:

>>> from transformers import AutoTokenizer, pipeline
>>> from optimum.onnxruntime import ORTModelForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/distilbert-base-uncased-mnli")
>>> model = ORTModelForSequenceClassification.from_pretrained("optimum/distilbert-base-uncased-mnli")
>>> onnx_z0 = pipeline("zero-shot-classification", model=model, tokenizer=tokenizer)

>>> sequence_to_classify = "Who are you voting for in 2020?"
>>> candidate_labels = ["Europe", "public health", "politics", "elections"]
>>> pred = onnx_z0(sequence_to_classify, candidate_labels, multi_label=True)

prepare_logits_buffer

( batch_size num_labels )

Prepares the logits output buffer: a 1D tensor that is later viewed with shape (batch_size, config.num_labels).

ORTModelForTokenClassification

class optimum.onnxruntime.ORTModelForTokenClassification

( model = None config = None use_io_binding = True **kwargs )

Parameters

  • config (transformers.PretrainedConfig) — PretrainedConfig is the Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
  • model (onnxruntime.InferenceSession) — onnxruntime.InferenceSession is the main class used to run a model. Check out the load_model() method for more information.
  • use_io_binding (bool, optional) — Whether to use IOBinding during inference to avoid memory copy between the host and devices. Defaults to True if the device is CUDA, otherwise defaults to False.

ONNX Model with a token classification head on top (a linear layer on top of the hidden-states output), e.g. for Named-Entity-Recognition (NER) tasks.

This model inherits from [~onnxruntime.modeling_ort.ORTModel]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

Token Classification model for ONNX.

forward

( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None **kwargs )

Parameters

  • input_ids (torch.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode and PreTrainedTokenizer.__call__ for details. What are input IDs?
  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
  • token_type_ids (torch.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token.

The ORTModelForTokenClassification forward method overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example of token classification:

>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForTokenClassification
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/bert-base-NER")
>>> model = ORTModelForTokenClassification.from_pretrained("optimum/bert-base-NER")

>>> inputs = tokenizer("My name is Philipp and I live in Germany.", return_tensors="pt")

>>> outputs = model(**inputs)
>>> logits = outputs.logits
>>> list(logits.shape)
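
To map the per-token logits to entity labels (a sketch using the config's id2label mapping):

>>> predictions = logits.argmax(-1)[0]
>>> labels = [model.config.id2label[int(p)] for p in predictions]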

Example using transformers.pipeline:

>>> from transformers import AutoTokenizer, pipeline
>>> from optimum.onnxruntime import ORTModelForTokenClassification

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/bert-base-NER")
>>> model = ORTModelForTokenClassification.from_pretrained("optimum/bert-base-NER")
>>> onnx_ner = pipeline("token-classification", model=model, tokenizer=tokenizer)

>>> text = "My name is Philipp and I live in Germany."
>>> pred = onnx_ner(text)

prepare_logits_buffer

( batch_size sequence_length num_labels )

Prepares the logits output buffer: a 1D tensor that is later viewed with shape (batch_size, sequence_length, config.num_labels).

ORTModelForCausalLM

class optimum.onnxruntime.ORTModelForCausalLM

( model = None config = None use_io_binding = True **kwargs )

Parameters

  • config (transformers.PretrainedConfig) — PretrainedConfig is the Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
  • model (onnxruntime.InferenceSession) — onnxruntime.InferenceSession is the main class used to run a model. Check out the load_model() method for more information.
  • use_io_binding (bool, optional) — Whether to use IOBinding during inference to avoid memory copy between the host and devices. Defaults to True if the device is CUDA, otherwise defaults to False.

ONNX Model with a causal language modeling head on top (a linear layer with weights tied to the input embeddings).

This model inherits from [~onnxruntime.modeling_ort.ORTModel]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

Causal LM model for ONNX.

forward

( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None **kwargs )

Parameters

  • input_ids (torch.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode and PreTrainedTokenizer.__call__ for details. What are input IDs?
  • attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
  • token_type_ids (torch.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token.

The ORTModelForCausalLM forward method overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example of text generation:

>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForCausalLM
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = ORTModelForCausalLM.from_pretrained("gpt2")

>>> inputs = tokenizer("My name is Philipp and I live in Germany.", return_tensors="pt")

>>> gen_tokens = model.generate(**inputs, do_sample=True, temperature=0.9, min_length=20, max_length=20)
>>> tokenizer.batch_decode(gen_tokens)

Example using transformers.pipeline:

>>> from transformers import AutoTokenizer, pipeline
>>> from optimum.onnxruntime import ORTModelForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = ORTModelForCausalLM.from_pretrained("gpt2", from_transformers=True)
>>> onnx_gen = pipeline("text-generation", model=model, tokenizer=tokenizer)

>>> text = "My name is Philipp and I live in Germany."
>>> gen = onnx_gen(text)

prepare_inputs_for_generation

( input_ids: LongTensor **kwargs )

Implement in subclasses of PreTrainedModel for custom behavior to prepare inputs in the generate method.

prepare_logits_buffer

( batch_size sequence_length )

Prepares the logits output buffer: a 1D tensor that is later viewed with shape (batch_size, sequence_length, config.vocab_size).

ORTModelForSeq2SeqLM

class optimum.onnxruntime.ORTModelForSeq2SeqLM

( *args **kwargs )

Sequence-to-sequence model with a language modeling head for ONNX Runtime inference.

forward

( input_ids: LongTensor = None attention_mask: typing.Optional[torch.FloatTensor] = None decoder_input_ids: typing.Optional[torch.LongTensor] = None encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None labels: typing.Optional[torch.LongTensor] = None **kwargs )

Parameters

  • input_ids (torch.LongTensor) — Indices of input sequence tokens in the vocabulary of shape (batch_size, encoder_sequence_length).
  • attention_mask (torch.LongTensor) — Mask to avoid performing attention on padding token indices, of shape (batch_size, encoder_sequence_length). Mask values selected in [0, 1].
  • decoder_input_ids (torch.LongTensor) — Indices of decoder input sequence tokens in the vocabulary of shape (batch_size, decoder_sequence_length).
  • encoder_outputs (torch.FloatTensor) — The encoder last_hidden_state of shape (batch_size, encoder_sequence_length, hidden_size).
  • past_key_values (tuple(tuple(torch.FloatTensor)), optional) — Contains the precomputed key and value hidden states of the attention blocks used to speed up decoding. The tuple is of length config.n_layers with each tuple having 2 tensors of shape (batch_size, num_heads, decoder_sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).

The ORTModelForSeq2SeqLM forward method overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example of text generation:

>>> from transformers import AutoTokenizer
>>> from optimum.onnxruntime import ORTModelForSeq2SeqLM

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/t5-small")
>>> model = ORTModelForSeq2SeqLM.from_pretrained("optimum/t5-small")

>>> inputs = tokenizer("My name is Eustache and I like to", return_tensors="pt")

>>> gen_tokens = model.generate(**inputs)
>>> outputs = tokenizer.batch_decode(gen_tokens)

Example using transformers.pipeline:

>>> from transformers import AutoTokenizer, pipeline
>>> from optimum.onnxruntime import ORTModelForSeq2SeqLM

>>> tokenizer = AutoTokenizer.from_pretrained("optimum/t5-small")
>>> model = ORTModelForSeq2SeqLM.from_pretrained("optimum/t5-small")
>>> onnx_translation = pipeline("translation_en_to_de", model=model, tokenizer=tokenizer)

>>> text = "My name is Eustache."
>>> pred = onnx_translation(text)

ORTModelForImageClassification

class optimum.onnxruntime.ORTModelForImageClassification

( model = None config = None use_io_binding = True **kwargs )

Parameters

  • config (transformers.PretrainedConfig) — PretrainedConfig is the Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
  • model (onnxruntime.InferenceSession) — onnxruntime.InferenceSession is the main class used to run a model. Check out the load_model() method for more information.
  • use_io_binding (bool, optional) — Whether to use IOBinding during inference to avoid memory copy between the host and devices. Defaults to True if the device is CUDA, otherwise defaults to False.

ONNX Model for image-classification tasks.

This model inherits from [~onnxruntime.modeling_ort.ORTModel]. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).

Image Classification model for ONNX.

forward

( pixel_values: Tensor **kwargs )

Parameters

  • pixel_values (torch.Tensor of shape (batch_size, num_channels, height, width)) — Pixel values corresponding to the images in the current batch. Pixel values can be obtained from encoded images using AutoFeatureExtractor.

The ORTModelForImageClassification forward method overrides the __call__ special method.

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example of image classification:

>>> import requests
>>> from PIL import Image
>>> from optimum.onnxruntime import ORTModelForImageClassification
>>> from transformers import AutoFeatureExtractor

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> preprocessor = AutoFeatureExtractor.from_pretrained("optimum/vit-base-patch16-224")
>>> model = ORTModelForImageClassification.from_pretrained("optimum/vit-base-patch16-224")

>>> inputs = preprocessor(images=image, return_tensors="pt")

>>> outputs = model(**inputs)
>>> logits = outputs.logits
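
To read off the predicted class from the logits (a sketch using the config's id2label mapping):

>>> predicted_label = model.config.id2label[logits.argmax(-1).item()]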

Example using transformers.pipeline:

>>> import requests
>>> from PIL import Image
>>> from transformers import AutoFeatureExtractor, pipeline
>>> from optimum.onnxruntime import ORTModelForImageClassification

>>> preprocessor = AutoFeatureExtractor.from_pretrained("optimum/vit-base-patch16-224")
>>> model = ORTModelForImageClassification.from_pretrained("optimum/vit-base-patch16-224")
>>> onnx_image_classifier = pipeline("image-classification", model=model, feature_extractor=preprocessor)

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> pred = onnx_image_classifier(url)

prepare_logits_buffer

( batch_size )

Prepares the logits output buffer: a 1D tensor that is later viewed with shape (batch_size, config.num_labels).