Vision Encoder Decoder Models

The VisionEncoderDecoderModel can be used to initialize an image-to-text-sequence model with any pretrained vision autoencoding model as the encoder (e.g. ViT, BEiT, DeiT) and any pretrained language model as the decoder (e.g. RoBERTa, GPT2, BERT).

The effectiveness of initializing image-to-text-sequence models with pretrained checkpoints has been shown in (for example) TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models by Minghao Li, Tengchao Lv, Lei Cui, Yijuan Lu, Dinei Florencio, Cha Zhang, Zhoujun Li, Furu Wei.

An example of how to use a VisionEncoderDecoderModel for inference can be seen in TrOCR.
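
For instance, a minimal inference sketch along those lines, reusing the microsoft/trocr-base-handwritten checkpoint and the IAM sample image that appear in the forward example further below:

>>> from transformers import TrOCRProcessor, VisionEncoderDecoderModel
>>> from PIL import Image
>>> import requests

>>> # load the TrOCR processor (feature extractor + tokenizer) and the encoder-decoder model
>>> processor = TrOCRProcessor.from_pretrained('microsoft/trocr-base-handwritten')
>>> model = VisionEncoderDecoderModel.from_pretrained('microsoft/trocr-base-handwritten')

>>> # fetch a handwriting sample from the IAM dataset
>>> url = "https://fki.tic.heia-fr.ch/static/img/a01-122-02.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

>>> # preprocess the image and autoregressively generate the transcription
>>> pixel_values = processor(image, return_tensors="pt").pixel_values
>>> generated_ids = model.generate(pixel_values)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]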

VisionEncoderDecoderConfig

class transformers.VisionEncoderDecoderConfig

( **kwargs )

VisionEncoderDecoderConfig is the configuration class to store the configuration of a VisionEncoderDecoderModel. It is used to instantiate a Vision-Encoder-Text-Decoder model according to the specified arguments, defining the encoder and decoder configs.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

Examples:

>>> from transformers import BertConfig, ViTConfig, VisionEncoderDecoderConfig, VisionEncoderDecoderModel

>>> # Initializing a ViT & BERT style configuration
>>> config_encoder = ViTConfig()
>>> config_decoder = BertConfig()

>>> config = VisionEncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder)

>>> # Initializing a ViTBert model from ViT & bert-base-uncased style configurations
>>> model = VisionEncoderDecoderModel(config=config)

>>> # Accessing the model configuration
>>> config_encoder = model.config.encoder
>>> config_decoder = model.config.decoder
>>> # set decoder config to causal lm
>>> config_decoder.is_decoder = True
>>> config_decoder.add_cross_attention = True

>>> # Saving the model, including its configuration
>>> model.save_pretrained('my-model')

>>> # loading model and config from pretrained folder
>>> encoder_decoder_config = VisionEncoderDecoderConfig.from_pretrained('my-model')
>>> model = VisionEncoderDecoderModel.from_pretrained('my-model', config=encoder_decoder_config)
from_encoder_decoder_configs

( encoder_config: PretrainedConfig decoder_config: PretrainedConfig **kwargs ) VisionEncoderDecoderConfig

Instantiate a VisionEncoderDecoderConfig (or a derived class) from a pre-trained encoder model configuration and decoder model configuration.

to_dict

( ) Dict[str, any]

Serializes this instance to a Python dictionary. Overrides the default to_dict() from PretrainedConfig.
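
As a minimal sketch, reusing the encoder_decoder_config loaded in the example above, the encoder and decoder configurations are serialized as nested dictionaries:

>>> # serialize the composite configuration to a plain Python dict
>>> config_dict = encoder_decoder_config.to_dict()
>>> # the encoder and decoder configurations appear under their own nested keys
>>> encoder_dict = config_dict["encoder"]
>>> decoder_dict = config_dict["decoder"]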

VisionEncoderDecoderModel

class transformers.VisionEncoderDecoderModel

( config: typing.Optional[transformers.configuration_utils.PretrainedConfig] = None encoder: typing.Optional[transformers.modeling_utils.PreTrainedModel] = None decoder: typing.Optional[transformers.modeling_utils.PreTrainedModel] = None )

This class can be used to initialize an image-to-text-sequence model with any pretrained vision autoencoding model as the encoder and any pretrained text autoregressive model as the decoder. Both the encoder and the decoder are loaded via the from_pretrained() function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream generative task, like image captioning.

The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in Leveraging Pre-trained Checkpoints for Sequence Generation Tasks by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.

Additionally, in TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models it is shown how leveraging large pretrained vision models for optical character recognition (OCR) yields a significant performance improvement.

After such a Vision-Encoder-Text-Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information).

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

VisionEncoderDecoderModel is a generic model class that will be instantiated as a transformer architecture with one of the base vision model classes of the library as encoder and another one as decoder, when created with the AutoModel.from_pretrained() class method for the encoder and the AutoModelForCausalLM.from_pretrained() class method for the decoder.
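
A short illustrative sketch of this: after loading, the chosen sub-models are exposed as regular attributes (the concrete classes instantiated depend on the checkpoints and are assumptions here):

>>> from transformers import VisionEncoderDecoderModel

>>> # the encoder is loaded with AutoModel, the decoder with AutoModelForCausalLM
>>> model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained('google/vit-base-patch16-224-in21k', 'bert-base-uncased')
>>> vision_encoder = model.encoder  # a ViT vision model
>>> text_decoder = model.decoder  # a BERT causal language model with cross-attention layers added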

forward

( pixel_values = None decoder_input_ids = None decoder_attention_mask = None encoder_outputs = None past_key_values = None decoder_inputs_embeds = None labels = None use_cache = None output_attentions = None output_hidden_states = None return_dict = None **kwargs ) Seq2SeqLMOutput or tuple(torch.FloatTensor)

The VisionEncoderDecoderModel forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Examples:

>>> from transformers import TrOCRProcessor, VisionEncoderDecoderModel
>>> import requests
>>> from PIL import Image
>>> import torch

>>> processor = TrOCRProcessor.from_pretrained('microsoft/trocr-base-handwritten')
>>> model = VisionEncoderDecoderModel.from_pretrained('microsoft/trocr-base-handwritten')

>>> # load image from the IAM dataset
>>> url = "https://fki.tic.heia-fr.ch/static/img/a01-122-02.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

>>> # training
>>> model.config.decoder_start_token_id = processor.tokenizer.cls_token_id
>>> model.config.pad_token_id = processor.tokenizer.pad_token_id
>>> model.config.vocab_size = model.config.decoder.vocab_size

>>> pixel_values = processor(image, return_tensors="pt").pixel_values
>>> text = "hello world"
>>> labels = processor.tokenizer(text, return_tensors="pt").input_ids
>>> outputs = model(pixel_values=pixel_values, labels=labels)
>>> loss = outputs.loss

>>> # inference (generation)
>>> generated_ids = model.generate(pixel_values)
>>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
from_encoder_decoder_pretrained

( encoder_pretrained_model_name_or_path: str = None decoder_pretrained_model_name_or_path: str = None *model_args **kwargs )

Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model checkpoints.

The model is set in evaluation mode by default using model.eval() (Dropout modules are deactivated). To train the model, you need to first set it back in training mode with model.train().

Params: encoder_pretrained_model_name_or_path (str, optional): Information necessary to initiate the image encoder. Can be either:

  • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. An example is google/vit-base-patch16-224-in21k.
  • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
  • A path or URL to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.

decoder_pretrained_model_name_or_path (str, optional, defaults to None): Information necessary to initiate the text decoder. Can be either:

  • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
  • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
  • A path or URL to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.

model_args (remaining positional arguments, optional): All remaining positional arguments will be passed to the underlying model’s __init__ method.

kwargs (remaining dictionary of keyword arguments, optional): Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g., output_attentions=True).

  • To update the encoder configuration, use the prefix encoder_ for each configuration parameter.
  • To update the decoder configuration, use the prefix decoder_ for each configuration parameter (see the sketch below).
  • To update the parent model configuration, do not use a prefix for each configuration parameter.

Behaves differently depending on whether a config is provided or automatically loaded.
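
For example, a hedged sketch of the prefixed keyword arguments (hidden_dropout_prob is assumed to exist on both the ViT encoder and BERT decoder configurations):

>>> from transformers import VisionEncoderDecoderModel

>>> # kwargs prefixed with encoder_/decoder_ are routed to the respective sub-model configuration
>>> model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained('google/vit-base-patch16-224-in21k', 'bert-base-uncased', encoder_hidden_dropout_prob=0.2, decoder_hidden_dropout_prob=0.2)
>>> # the updates are reflected in the composite configuration
>>> encoder_dropout = model.config.encoder.hidden_dropout_prob
>>> decoder_dropout = model.config.decoder.hidden_dropout_prob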

Example:

>>> from transformers import VisionEncoderDecoderModel
>>> # initialize a vit-bert from a pretrained ViT and a pretrained BERT model. Note that the cross-attention layers will be randomly initialized
>>> model = VisionEncoderDecoderModel.from_encoder_decoder_pretrained('google/vit-base-patch16-224-in21k', 'bert-base-uncased')
>>> # saving model after fine-tuning
>>> model.save_pretrained("./vit-bert")
>>> # load fine-tuned model
>>> model = VisionEncoderDecoderModel.from_pretrained("./vit-bert")

FlaxVisionEncoderDecoderModel

class transformers.FlaxVisionEncoderDecoderModel

( config: VisionEncoderDecoderConfig input_shape: typing.Optional[typing.Tuple] = None seed: int = 0 dtype: dtype = <class 'jax._src.numpy.lax_numpy.float32'> **kwargs )

This class can be used to initialize an image-to-text-sequence model with any pretrained vision autoencoding model as the encoder and any pretrained text autoregressive model as the decoder. Both the encoder and the decoder are loaded via the from_pretrained() function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream generative task, like image captioning.

The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in Leveraging Pre-trained Checkpoints for Sequence Generation Tasks by Sascha Rothe, Shashi Narayan, Aliaksei Severyn.

Additionally, in TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models it is shown how leveraging large pretrained vision models for optical character recognition (OCR) yields a significant performance improvement.

After such a Vision-Encoder-Text-Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information).

This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a Flax Linen flax.nn.Module subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.

FlaxVisionEncoderDecoderModel is a generic model class that will be instantiated as a transformer architecture with the module (flax.nn.Module) of one of the base vision model classes of the library as encoder module and another one as decoder module, when created with the FlaxAutoModel.from_pretrained() class method for the encoder and the FlaxAutoModelForCausalLM.from_pretrained() class method for the decoder.

__call__

( pixel_values: ndarray decoder_input_ids: typing.Optional[jax._src.numpy.lax_numpy.ndarray] = None decoder_attention_mask: typing.Optional[jax._src.numpy.lax_numpy.ndarray] = None decoder_position_ids: typing.Optional[jax._src.numpy.lax_numpy.ndarray] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None train: bool = False params: dict = None dropout_rng: PRNGKey = None ) FlaxSeq2SeqLMOutput or tuple(jnp.ndarray)

The FlaxVisionEncoderDecoderModel forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Examples:

>>> from transformers import FlaxVisionEncoderDecoderModel, ViTFeatureExtractor, GPT2Tokenizer
>>> from PIL import Image
>>> import requests

>>> url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224-in21k')

>>> # load output tokenizer
>>> tokenizer_output = GPT2Tokenizer.from_pretrained('gpt2')

>>> # initialize a vit-gpt2 from pretrained ViT and GPT2 models. Note that the cross-attention layers will be randomly initialized
>>> model = FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained('google/vit-base-patch16-224-in21k', 'gpt2')

>>> pixel_values = feature_extractor(images=image, return_tensors="np").pixel_values

>>> # use GPT2's eos_token as the pad as well as eos token
>>> model.config.eos_token_id = model.config.decoder.eos_token_id
>>> model.config.pad_token_id = model.config.eos_token_id

>>> # generation
>>> sequences = model.generate(pixel_values, num_beams=4, max_length=12).sequences

>>> captions = tokenizer_output.batch_decode(sequences, skip_special_tokens=True)
from_encoder_decoder_pretrained

( encoder_pretrained_model_name_or_path: typing.Union[str, os.PathLike, NoneType] = None decoder_pretrained_model_name_or_path: typing.Union[str, os.PathLike, NoneType] = None *model_args **kwargs )

Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model checkpoints.

Params: encoder_pretrained_model_name_or_path (Union[str, os.PathLike], optional): Information necessary to initiate the encoder. Can be either:

  • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. An example is google/vit-base-patch16-224-in21k.
  • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.

decoder_pretrained_model_name_or_path (Union[str, os.PathLike], optional, defaults to None): Information necessary to initiate the decoder. Can be either:

  • A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
  • A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.

model_args (remaining positional arguments, optional): All remaining positional arguments will be passed to the underlying model’s __init__ method.

kwargs (remaining dictionary of keyword arguments, optional): Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g., output_attentions=True).

  • To update the encoder configuration, use the prefix encoder_ for each configuration parameter.
  • To update the decoder configuration, use the prefix decoder_ for each configuration parameter.
  • To update the parent model configuration, do not use a prefix for each configuration parameter.

Behaves differently depending on whether a config is provided or automatically loaded.

Example:

>>> from transformers import FlaxVisionEncoderDecoderModel
>>> # initialize a vit-gpt2 from a pretrained ViT and a pretrained GPT2 model. Note that the cross-attention layers will be randomly initialized
>>> model = FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained('google/vit-base-patch16-224-in21k', 'gpt2')
>>> # saving model after fine-tuning
>>> model.save_pretrained("./vit-gpt2")
>>> # load fine-tuned model
>>> model = FlaxVisionEncoderDecoderModel.from_pretrained("./vit-gpt2")