class optimum.intel.openvino.OVModelForFeatureExtraction( model = None config = None **kwargs )
forward( input_ids: Tensor attention_mask: Tensor token_type_ids: typing.Optional[torch.Tensor] = None **kwargs )
Parameters
input_ids (torch.Tensor) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See What are input IDs?
attention_mask (torch.Tensor, optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1].
token_type_ids (torch.Tensor, optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1].
The OVModelForFeatureExtraction forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example of feature extraction using transformers.pipeline:
>>> from transformers import AutoTokenizer, pipeline
>>> from optimum.intel.openvino import OVModelForFeatureExtraction
>>> tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
>>> model = OVModelForFeatureExtraction.from_pretrained("sentence-transformers/all-MiniLM-L6-v2", from_transformers=True)
>>> pipe = pipeline("feature-extraction", model=model, tokenizer=tokenizer)
>>> outputs = pipe("My Name is Peter and I live in New York.")
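The model can also be called directly instead of through a pipeline. Below is a minimal sketch of that usage, assuming the forward call returns an output exposing last_hidden_state in the usual transformers fashion:
>>> from transformers import AutoTokenizer
>>> from optimum.intel.openvino import OVModelForFeatureExtraction
>>> tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
>>> model = OVModelForFeatureExtraction.from_pretrained("sentence-transformers/all-MiniLM-L6-v2", from_transformers=True)
>>> # Tokenization produces the input_ids and attention_mask documented above
>>> inputs = tokenizer("My Name is Peter and I live in New York.", return_tensors="pt")
>>> outputs = model(**inputs)
>>> # Token-level embeddings of shape (batch_size, sequence_length, hidden_size)
>>> embeddings = outputs.last_hidden_state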
class optimum.intel.openvino.OVModelForMaskedLM( model = None config = None **kwargs )
Parameters
config (transformers.PretrainedConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the ~intel.openvino.modeling.OVBaseModel.from_pretrained method to load the model weights.
model (openvino.runtime.Model) —
The main class used to run OpenVINO Runtime inference.
OpenVINO Model with a MaskedLMOutput for masked language modeling tasks.
This model inherits from optimum.intel.openvino.modeling.OVBaseModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).
forward( input_ids: Tensor attention_mask: Tensor token_type_ids: typing.Optional[torch.Tensor] = None **kwargs )
Parameters
input_ids (torch.Tensor) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See What are input IDs?
attention_mask (torch.Tensor, optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1].
token_type_ids (torch.Tensor, optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1].
The OVModelForMaskedLM forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example of masked language modeling using transformers.pipeline:
>>> from transformers import AutoTokenizer, pipeline
>>> from optimum.intel.openvino import OVModelForMaskedLM
>>> tokenizer = AutoTokenizer.from_pretrained("roberta-base")
>>> model = OVModelForMaskedLM.from_pretrained("roberta-base", from_transformers=True)
>>> mask_token = tokenizer.mask_token
>>> pipe = pipeline("fill-mask", model=model, tokenizer=tokenizer)
>>> outputs = pipe("The goal of life is " + mask_token)
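The mask can also be filled by calling the model directly. A minimal sketch, assuming the output exposes logits as a MaskedLMOutput (as described above); the post-processing shown is illustrative:
>>> from transformers import AutoTokenizer
>>> from optimum.intel.openvino import OVModelForMaskedLM
>>> tokenizer = AutoTokenizer.from_pretrained("roberta-base")
>>> model = OVModelForMaskedLM.from_pretrained("roberta-base", from_transformers=True)
>>> inputs = tokenizer("The goal of life is " + tokenizer.mask_token, return_tensors="pt")
>>> logits = model(**inputs).logits
>>> # Position of the mask token in the sequence
>>> mask_index = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
>>> predicted_id = logits[0, mask_index].argmax(dim=-1)
>>> print(tokenizer.decode(predicted_id))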
class optimum.intel.openvino.OVModelForQuestionAnswering( model = None config = None **kwargs )
Parameters
config (transformers.PretrainedConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the ~intel.openvino.modeling.OVBaseModel.from_pretrained method to load the model weights.
model (openvino.runtime.Model) —
The main class used to run OpenVINO Runtime inference.
OpenVINO Model with a QuestionAnsweringModelOutput for extractive question-answering tasks.
This model inherits from optimum.intel.openvino.modeling.OVBaseModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).
forward( input_ids: Tensor attention_mask: Tensor token_type_ids: typing.Optional[torch.Tensor] = None **kwargs )
Parameters
input_ids (torch.Tensor) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See What are input IDs?
attention_mask (torch.Tensor, optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1].
token_type_ids (torch.Tensor, optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1].
The OVModelForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example of question answering using transformers.pipeline:
>>> from transformers import AutoTokenizer, pipeline
>>> from optimum.intel.openvino import OVModelForQuestionAnswering
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased-distilled-squad")
>>> model = OVModelForQuestionAnswering.from_pretrained("distilbert-base-cased-distilled-squad", from_transformers=True)
>>> pipe = pipeline("question-answering", model=model, tokenizer=tokenizer)
>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
>>> outputs = pipe(question, text)
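The answer span can also be extracted by calling the model directly. A minimal sketch, assuming the output exposes start_logits and end_logits as a QuestionAnsweringModelOutput (as described above):
>>> from transformers import AutoTokenizer
>>> from optimum.intel.openvino import OVModelForQuestionAnswering
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased-distilled-squad")
>>> model = OVModelForQuestionAnswering.from_pretrained("distilbert-base-cased-distilled-squad", from_transformers=True)
>>> inputs = tokenizer("Who was Jim Henson?", "Jim Henson was a nice puppet", return_tensors="pt")
>>> outputs = model(**inputs)
>>> # Most likely start and end token positions of the answer span
>>> start = outputs.start_logits.argmax(-1).item()
>>> end = outputs.end_logits.argmax(-1).item()
>>> answer = tokenizer.decode(inputs["input_ids"][0, start : end + 1])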
class optimum.intel.openvino.OVModelForSequenceClassification( model = None config = None **kwargs )
Parameters
config (transformers.PretrainedConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the ~intel.openvino.modeling.OVBaseModel.from_pretrained method to load the model weights.
model (openvino.runtime.Model) —
The main class used to run OpenVINO Runtime inference.
OpenVINO Model with a SequenceClassifierOutput for sequence classification tasks.
This model inherits from optimum.intel.openvino.modeling.OVBaseModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).
forward( input_ids: Tensor attention_mask: Tensor token_type_ids: typing.Optional[torch.Tensor] = None **kwargs )
Parameters
input_ids (torch.Tensor) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See What are input IDs?
attention_mask (torch.Tensor, optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1].
token_type_ids (torch.Tensor, optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1].
The OVModelForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example of sequence classification using transformers.pipeline:
>>> from transformers import AutoTokenizer, pipeline
>>> from optimum.intel.openvino import OVModelForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
>>> model = OVModelForSequenceClassification.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english", from_transformers=True)
>>> pipe = pipeline("text-classification", model=model, tokenizer=tokenizer)
>>> outputs = pipe("Hello, my dog is cute")
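Calling the model directly returns raw logits, which can be mapped to labels through the model configuration. A minimal sketch, assuming a SequenceClassifierOutput with logits (as described above) and an id2label mapping in the config:
>>> from transformers import AutoTokenizer
>>> from optimum.intel.openvino import OVModelForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
>>> model = OVModelForSequenceClassification.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english", from_transformers=True)
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> logits = model(**inputs).logits
>>> # Map the highest-scoring class index to its label
>>> predicted_label = model.config.id2label[logits.argmax(-1).item()]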
class optimum.intel.openvino.OVModelForTokenClassification( model = None config = None **kwargs )
Parameters
config (transformers.PretrainedConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the ~intel.openvino.modeling.OVBaseModel.from_pretrained method to load the model weights.
model (openvino.runtime.Model) —
The main class used to run OpenVINO Runtime inference.
OpenVINO Model with a TokenClassifierOutput for token classification tasks.
This model inherits from optimum.intel.openvino.modeling.OVBaseModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).
forward( input_ids: Tensor attention_mask: Tensor token_type_ids: typing.Optional[torch.Tensor] = None **kwargs )
Parameters
input_ids (torch.Tensor) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See What are input IDs?
attention_mask (torch.Tensor, optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1].
token_type_ids (torch.Tensor, optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1].
The OVModelForTokenClassification forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example of token classification using transformers.pipeline:
>>> from transformers import AutoTokenizer, pipeline
>>> from optimum.intel.openvino import OVModelForTokenClassification
>>> tokenizer = AutoTokenizer.from_pretrained("dslim/bert-base-NER")
>>> model = OVModelForTokenClassification.from_pretrained("dslim/bert-base-NER", from_transformers=True)
>>> pipe = pipeline("token-classification", model=model, tokenizer=tokenizer)
>>> outputs = pipe("My Name is Peter and I live in New York.")
class optimum.intel.openvino.OVModelForImageClassification( model = None config = None **kwargs )
Parameters
config (transformers.PretrainedConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the ~intel.openvino.modeling.OVBaseModel.from_pretrained method to load the model weights.
model (openvino.runtime.Model) —
The main class used to run OpenVINO Runtime inference.
OpenVINO Model with a ImageClassifierOutput for image classification tasks.
This model inherits from optimum.intel.openvino.modeling.OVBaseModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).
forward( pixel_values: Tensor **kwargs )
Parameters
pixel_values (torch.Tensor) —
Pixel values corresponding to the images in the current batch. Pixel values can be obtained from encoded images using AutoFeatureExtractor.
The OVModelForImageClassification forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example of image classification using transformers.pipeline:
>>> from transformers import AutoFeatureExtractor, pipeline
>>> from optimum.intel.openvino import OVModelForImageClassification
>>> preprocessor = AutoFeatureExtractor.from_pretrained("google/vit-base-patch16-224")
>>> model = OVModelForImageClassification.from_pretrained("google/vit-base-patch16-224", from_transformers=True)
>>> model.reshape(batch_size=1, sequence_length=3, height=224, width=224)
>>> pipe = pipeline("image-classification", model=model, feature_extractor=preprocessor)
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> outputs = pipe(url)
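The model can also be called directly with the pixel_values documented above. A minimal sketch, assuming PIL and requests are available to fetch and decode the image, and that the output exposes logits as an ImageClassifierOutput:
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoFeatureExtractor
>>> from optimum.intel.openvino import OVModelForImageClassification
>>> preprocessor = AutoFeatureExtractor.from_pretrained("google/vit-base-patch16-224")
>>> model = OVModelForImageClassification.from_pretrained("google/vit-base-patch16-224", from_transformers=True)
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> # The feature extractor converts the image into the pixel_values tensor expected by forward
>>> inputs = preprocessor(images=image, return_tensors="pt")
>>> logits = model(**inputs).logits
>>> predicted_label = model.config.id2label[logits.argmax(-1).item()]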
class optimum.intel.openvino.OVModelForCausalLM( model = None config = None **kwargs )
Parameters
config (transformers.PretrainedConfig) —
Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the ~intel.openvino.modeling.OVBaseModel.from_pretrained method to load the model weights.
model (openvino.runtime.Model) —
The main class used to run OpenVINO Runtime inference.
OpenVINO Model with a causal language modeling head on top (linear layer with weights tied to the input embeddings).
This model inherits from optimum.intel.openvino.modeling.OVBaseModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving).
Causal LM model for OpenVINO.
forward( input_ids: Tensor attention_mask: Tensor token_type_ids: typing.Optional[torch.Tensor] = None **kwargs )
Parameters
input_ids (torch.Tensor) —
Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See What are input IDs?
attention_mask (torch.Tensor, optional) —
Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1].
token_type_ids (torch.Tensor, optional) —
Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1].
The OVModelForCausalLM forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example of text generation:
>>> from transformers import AutoTokenizer
>>> from optimum.intel.openvino import OVModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = OVModelForCausalLM.from_pretrained("gpt2", from_transformers=True)
>>> inputs = tokenizer("I love this story because", return_tensors="pt")
>>> gen_tokens = model.generate(**inputs, do_sample=True, temperature=0.9, min_length=20, max_length=20)
>>> tokenizer.batch_decode(gen_tokens)
Example using transformers.pipeline:
>>> from transformers import AutoTokenizer, pipeline
>>> from optimum.intel.openvino import OVModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = OVModelForCausalLM.from_pretrained("gpt2", from_transformers=True)
>>> gen_pipeline = pipeline("text-generation", model=model, tokenizer=tokenizer)
>>> text = "I love this story because"
>>> gen = gen_pipeline(text)
class optimum.intel.openvino.OVModelForSeq2SeqLM( encoder: Model decoder: Model decoder_with_past: Model = None config: PretrainedConfig = None **kwargs )
Parameters
encoder (openvino.runtime.Model) —
The OpenVINO Runtime model associated with the encoder.
decoder (openvino.runtime.Model) —
The OpenVINO Runtime model associated with the decoder.
decoder_with_past (openvino.runtime.Model, optional) —
The OpenVINO Runtime model associated with the decoder with past key values.
config (transformers.PretrainedConfig) —
An instance of the configuration associated with the model. Initializing with a config file does not load the weights associated with the model, only the configuration.
Sequence-to-sequence model with a language modeling head for OpenVINO inference.
forward( input_ids: LongTensor = None attention_mask: typing.Optional[torch.FloatTensor] = None decoder_input_ids: typing.Optional[torch.LongTensor] = None encoder_outputs: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None **kwargs )
Parameters
input_ids (torch.LongTensor) —
Indices of input sequence tokens in the vocabulary, of shape (batch_size, encoder_sequence_length).
attention_mask (torch.LongTensor, optional) —
Mask to avoid performing attention on padding token indices, of shape (batch_size, encoder_sequence_length). Mask values selected in [0, 1].
decoder_input_ids (torch.LongTensor, optional) —
Indices of decoder input sequence tokens in the vocabulary, of shape (batch_size, decoder_sequence_length).
encoder_outputs (torch.FloatTensor, optional) —
The encoder last_hidden_state of shape (batch_size, encoder_sequence_length, hidden_size).
past_key_values (tuple(tuple(torch.FloatTensor)), optional) —
Contains the precomputed key and value hidden states of the attention blocks used to speed up decoding. The tuple is of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, decoder_sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
The OVModelForSeq2SeqLM forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module
instance afterwards instead of this since the former takes care of running the pre and post processing steps while
the latter silently ignores them.
Example of text generation:
>>> from transformers import AutoTokenizer
>>> from optimum.intel.openvino import OVModelForSeq2SeqLM
>>> tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-fr")
>>> model = OVModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-fr", from_transformers=True)
>>> text = "He never went out without a book under his arm, and he often came back with two."
>>> inputs = tokenizer(text, return_tensors="pt")
>>> gen_tokens = model.generate(**inputs)
>>> outputs = tokenizer.batch_decode(gen_tokens)
Example using transformers.pipeline:
>>> from transformers import AutoTokenizer, pipeline
>>> from optimum.intel.openvino import OVModelForSeq2SeqLM
>>> tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-fr")
>>> model = OVModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-fr", from_transformers=True)
>>> pipe = pipeline("translation_en_to_fr", model=model, tokenizer=tokenizer)
>>> text = "He never went out without a book under his arm, and he often came back with two."
>>> outputs = pipe(text)
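A single decoding step can also be run through forward directly, which makes the role of decoder_input_ids explicit. A minimal sketch, assuming the output exposes logits like a Seq2SeqLMOutput and that the model configuration defines decoder_start_token_id:
>>> import torch
>>> from transformers import AutoTokenizer
>>> from optimum.intel.openvino import OVModelForSeq2SeqLM
>>> tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-fr")
>>> model = OVModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-fr", from_transformers=True)
>>> inputs = tokenizer("He never went out without a book under his arm.", return_tensors="pt")
>>> # Decoding starts from the decoder start token
>>> decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])
>>> outputs = model(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], decoder_input_ids=decoder_input_ids)
>>> # Logits for the first generated position, of shape (batch_size, 1, vocab_size)
>>> next_token_logits = outputs.logits[:, -1, :]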
class optimum.intel.openvino.OVQuantizer
Handle the NNCF quantization process.
get_calibration_dataset( dataset_name: str num_samples: int = 100 dataset_config_name: typing.Optional[str] = None dataset_split: str = 'train' preprocess_function: typing.Optional[typing.Callable] = None preprocess_batch: bool = True use_auth_token: bool = False )
Parameters
dataset_name (str) —
The dataset repository name on the Hugging Face Hub or path to a local directory containing data files in generic formats and optionally a dataset script, if it requires some code to read the data files.
num_samples (int, defaults to 100) —
The maximum number of samples composing the calibration dataset.
dataset_config_name (str, optional) —
The name of the dataset configuration.
dataset_split (str, defaults to "train") —
Which split of the dataset to use to perform the calibration step.
preprocess_function (Callable, optional) —
Processing function to apply to each example after loading the dataset.
preprocess_batch (bool, defaults to True) —
Whether the preprocess_function should be batched.
use_auth_token (bool, defaults to False) —
Whether to use the token generated when running transformers-cli login.
Create the calibration datasets.Dataset to use for the post-training static quantization calibration step.
quantize( calibration_dataset: Dataset save_directory: typing.Union[str, pathlib.Path] quantization_config: OVConfig = None file_name: typing.Optional[str] = None batch_size: int = 8 data_collator: typing.Optional[DataCollator] = None remove_unused_columns: bool = True )
Parameters
calibration_dataset (datasets.Dataset) —
The dataset to use for the calibration step.
save_directory (Union[str, Path]) —
The directory where the quantized model should be saved.
quantization_config (OVConfig, optional) —
The configuration containing the parameters related to quantization.
file_name (str, optional) —
The model file name to use when saving the model. Overwrites the default file name "model.onnx".
batch_size (int, defaults to 8) —
The number of calibration samples to load per batch.
data_collator (DataCollator, optional) —
The function to use to form a batch from a list of elements of the calibration dataset.
remove_unused_columns (bool, defaults to True) —
Whether or not to remove the columns unused by the model forward method.
Quantize a model given the optimization specifications defined in quantization_config.
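End to end, the two methods above are typically combined: build a calibration dataset with get_calibration_dataset, then call quantize. The following sketch assumes OVQuantizer.from_pretrained and a default OVConfig from optimum.intel.openvino; the model, dataset, and preprocessing choices are illustrative:
>>> from functools import partial
>>> from transformers import AutoModelForSequenceClassification, AutoTokenizer
>>> from optimum.intel.openvino import OVConfig, OVQuantizer
>>> model_id = "distilbert-base-uncased-finetuned-sst-2-english"
>>> model = AutoModelForSequenceClassification.from_pretrained(model_id)
>>> tokenizer = AutoTokenizer.from_pretrained(model_id)
>>> def preprocess_fn(examples, tokenizer):
...     return tokenizer(examples["sentence"], padding=True, truncation=True)
>>> quantizer = OVQuantizer.from_pretrained(model)
>>> calibration_dataset = quantizer.get_calibration_dataset(
...     "glue",
...     dataset_config_name="sst2",
...     preprocess_function=partial(preprocess_fn, tokenizer=tokenizer),
...     num_samples=100,
...     dataset_split="train",
... )
>>> # Apply post-training static quantization and save the resulting model
>>> quantizer.quantize(calibration_dataset=calibration_dataset, quantization_config=OVConfig(), save_directory="ov_quantized_model")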