https://huggingface.co/docs/transformers/model_doc/blip-2
BLIP-2 Overview The BLIP-2 model was proposed in BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models by Junnan Li, Dongxu Li, Silvio Savarese, Steven Hoi. BLIP-2 leverages frozen pre-trained image encoders and large language models (LLMs) by training a lightweight, 12-layer Transformer encoder in between them, achieving state-of-the-art performance on various vision-language tasks. Most notably, BLIP-2 improves upon Flamingo, an 80 billion parameter model, by 8.7% on zero-shot VQAv2 with 54x fewer trainable parameters. The abstract from the paper is the following: The cost of vision-and-language pre-training has become increasingly prohibitive due to end-to-end training of large-scale models. This paper proposes BLIP-2, a generic and efficient pre-training strategy that bootstraps vision-language pre-training from off-the-shelf frozen pre-trained image encoders and frozen large language models. BLIP-2 bridges the modality gap with a lightweight Querying Transformer, which is pre-trained in two stages. The first stage bootstraps vision-language representation learning from a frozen image encoder. The second stage bootstraps vision-to-language generative learning from a frozen language model. BLIP-2 achieves state-of-the-art performance on various vision-language tasks, despite having significantly fewer trainable parameters than existing methods. For example, our model outperforms Flamingo80B by 8.7% on zero-shot VQAv2 with 54x fewer trainable parameters. We also demonstrate the model’s emerging capabilities of zero-shot image-to-text generation that can follow natural language instructions. Tips: BLIP-2 can be used for conditional text generation given an image and an optional text prompt. At inference time, it’s recommended to use the generate method. One can use Blip2Processor to prepare images for the model, and decode the predicted token IDs back to text. BLIP-2 architecture. Taken from the original paper. This model was contributed by nielsr. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BLIP-2. Demo notebooks for BLIP-2 for image captioning, visual question answering (VQA) and chat-like conversations can be found here. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. Blip2Config class transformers.Blip2Config < source > ( vision_config = None qformer_config = None text_config = None num_query_tokens = 32 **kwargs ) Parameters vision_config (dict, optional) — Dictionary of configuration options used to initialize Blip2VisionConfig. qformer_config (dict, optional) — Dictionary of configuration options used to initialize Blip2QFormerConfig. text_config (dict, optional) — Dictionary of configuration options used to initialize any PretrainedConfig. num_query_tokens (int, optional, defaults to 32) — The number of query tokens passed through the Transformer. kwargs (optional) — Dictionary of keyword arguments. Blip2Config is the configuration class to store the configuration of a Blip2ForConditionalGeneration. It is used to instantiate a BLIP-2 model according to the specified arguments, defining the vision model, Q-Former model and language model configs. 
Instantiating a configuration with the defaults will yield a similar configuration to that of the BLIP-2 Salesforce/blip2-opt-2.7b architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import ( ... Blip2VisionConfig, ... Blip2QFormerConfig, ... OPTConfig, ... Blip2Config, ... Blip2ForConditionalGeneration, ... ) >>> >>> configuration = Blip2Config() >>> >>> model = Blip2ForConditionalGeneration(configuration) >>> >>> configuration = model.config >>> >>> >>> vision_config = Blip2VisionConfig() >>> qformer_config = Blip2QFormerConfig() >>> text_config = OPTConfig() >>> config = Blip2Config.from_vision_qformer_text_configs(vision_config, qformer_config, text_config) from_vision_qformer_text_configs < source > ( vision_config: Blip2VisionConfig qformer_config: Blip2QFormerConfig text_config: PretrainedConfig **kwargs ) → Blip2Config An instance of a configuration object Instantiate a Blip2Config (or a derived class) from a BLIP-2 vision model, Q-Former and language model configurations. Blip2VisionConfig class transformers.Blip2VisionConfig < source > ( hidden_size = 1408 intermediate_size = 6144 num_hidden_layers = 39 num_attention_heads = 16 image_size = 224 patch_size = 14 hidden_act = 'gelu' layer_norm_eps = 1e-06 attention_dropout = 0.0 initializer_range = 1e-10 qkv_bias = True **kwargs ) Parameters hidden_size (int, optional, defaults to 1408) — Dimensionality of the encoder layers and the pooler layer. intermediate_size (int, optional, defaults to 6144) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder. num_hidden_layers (int, optional, defaults to 39) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 16) — Number of attention heads for each attention layer in the Transformer encoder. image_size (int, optional, defaults to 224) — The size (resolution) of each image. patch_size (int, optional, defaults to 14) — The size (resolution) of each patch. hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported. layer_norm_eps (float, optional, defaults to 1e-6) — The epsilon used by the layer normalization layers. attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities. initializer_range (float, optional, defaults to 1e-10) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. qkv_bias (bool, optional, defaults to True) — Whether to add a bias to the queries and values in the self-attention layers. This is the configuration class to store the configuration of a Blip2VisionModel. It is used to instantiate a BLIP-2 vision encoder according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the BLIP-2 Salesforce/blip2-opt-2.7b architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. 
Example: >>> from transformers import Blip2VisionConfig, Blip2VisionModel >>> >>> configuration = Blip2VisionConfig() >>> >>> model = Blip2VisionModel(configuration) >>> >>> configuration = model.config Blip2QFormerConfig class transformers.Blip2QFormerConfig < source > ( vocab_size = 30522 hidden_size = 768 num_hidden_layers = 12 num_attention_heads = 12 intermediate_size = 3072 hidden_act = 'gelu' hidden_dropout_prob = 0.1 attention_probs_dropout_prob = 0.1 max_position_embeddings = 512 initializer_range = 0.02 layer_norm_eps = 1e-12 pad_token_id = 0 position_embedding_type = 'absolute' cross_attention_frequency = 2 encoder_hidden_size = 1408 **kwargs ) Parameters vocab_size (int, optional, defaults to 30522) — Vocabulary size of the Q-Former model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling the model. hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder. hidden_act (str or Callable, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported. hidden_dropout_prob (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (float, optional, defaults to 0.1) — The dropout ratio for the attention probabilities. max_position_embeddings (int, optional, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers. position_embedding_type (str, optional, defaults to "absolute") — Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For positional embeddings use "absolute". For more information on "relative_key", please refer to Self-Attention with Relative Position Representations (Shaw et al.). For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models with Better Relative Position Embeddings (Huang et al.). cross_attention_frequency (int, optional, defaults to 2) — The frequency of adding cross-attention to the Transformer layers. encoder_hidden_size (int, optional, defaults to 1408) — The hidden size of the hidden states for cross-attention. This is the configuration class to store the configuration of a Blip2QFormerModel. It is used to instantiate a BLIP-2 Querying Transformer (Q-Former) model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the BLIP-2 Salesforce/blip2-opt-2.7b architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. 
Read the documentation from PretrainedConfig for more information. Note that Blip2QFormerModel is very similar to BertLMHeadModel with interleaved cross-attention. Examples: >>> from transformers import Blip2QFormerConfig, Blip2QFormerModel >>> >>> configuration = Blip2QFormerConfig() >>> >>> model = Blip2QFormerModel(configuration) >>> >>> configuration = model.config Blip2Processor class transformers.Blip2Processor < source > ( image_processor tokenizer ) Parameters image_processor (BlipImageProcessor) — An instance of BlipImageProcessor. The image processor is a required input. tokenizer (AutoTokenizer) — An instance of PreTrainedTokenizer. The tokenizer is a required input. Constructs a BLIP-2 processor which wraps a BLIP image processor and an OPT/T5 tokenizer into a single processor. Blip2Processor offers all the functionalities of BlipImageProcessor and AutoTokenizer. See the docstring of __call__() and decode() for more information. This method forwards all its arguments to PreTrainedTokenizer’s batch_decode(). Please refer to the docstring of this method for more information. This method forwards all its arguments to PreTrainedTokenizer’s decode(). Please refer to the docstring of this method for more information. Blip2VisionModel class transformers.Blip2VisionModel < source > ( config: Blip2VisionConfig ) forward < source > ( pixel_values: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using Blip2Processor. See Blip2Processor.__call__() for details. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.blip_2.configuration_blip_2.Blip2VisionConfig'>) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. 
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The Blip2VisionModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Blip2QFormerModel class transformers.Blip2QFormerModel < source > ( config: Blip2QFormerConfig ) Querying Transformer (Q-Former), used in BLIP-2. forward < source > ( query_embeds: FloatTensor attention_mask: typing.Optional[torch.FloatTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None encoder_hidden_states: typing.Optional[torch.FloatTensor] = None encoder_attention_mask: typing.Optional[torch.FloatTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional): Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional): Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of: shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)): Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). use_cache (bool, optional): If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). Blip2Model class transformers.Blip2Model < source > ( config: Blip2Config ) Parameters config (Blip2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. BLIP-2 Model for generating text and image features. 
The model consists of a vision encoder, Querying Transformer (Q-Former) and a language model. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: FloatTensor input_ids: FloatTensor attention_mask: typing.Optional[torch.LongTensor] = None decoder_input_ids: typing.Optional[torch.LongTensor] = None decoder_attention_mask: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None labels: typing.Optional[torch.LongTensor] = None return_dict: typing.Optional[bool] = None ) → transformers.models.blip_2.modeling_blip_2.Blip2ForConditionalGenerationModelOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using Blip2Processor. See Blip2Processor.__call__() for details. input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of input sequence tokens in the vocabulary of the language model. Input tokens can optionally be provided to serve as text prompt, which the language model can continue. Indices can be obtained using Blip2Processor. See Blip2Processor.__call__() for details. What are input IDs? attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary of the language model. Only relevant in case an encoder-decoder language model (like T5) is used. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are decoder input IDs? decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default. Only relevant in case an encoder-decoder language model (like T5) is used. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.models.blip_2.modeling_blip_2.Blip2ForConditionalGenerationModelOutput or tuple(torch.FloatTensor) A transformers.models.blip_2.modeling_blip_2.Blip2ForConditionalGenerationModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.blip_2.configuration_blip_2.Blip2VisionConfig'>) and inputs. 
loss (torch.FloatTensor, optional, returned when labels is provided, torch.FloatTensor of shape (1,)) — Language modeling loss from the language model. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head of the language model. vision_outputs (BaseModelOutputWithPooling) — Outputs of the vision encoder. qformer_outputs (BaseModelOutputWithPoolingAndCrossAttentions) — Outputs of the Q-Former (Querying Transformer). language_model_outputs (CausalLMOutputWithPast or Seq2SeqLMOutput) — Outputs of the language model. The Blip2Model forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from PIL import Image >>> import requests >>> from transformers import Blip2Processor, Blip2Model >>> import torch >>> device = "cuda" if torch.cuda.is_available() else "cpu" >>> processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") >>> model = Blip2Model.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16) >>> model.to(device) >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> prompt = "Question: how many cats are there? Answer:" >>> inputs = processor(images=image, text=prompt, return_tensors="pt").to(device, torch.float16) >>> outputs = model(**inputs) get_text_features < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None decoder_input_ids: typing.Optional[torch.Tensor] = None decoder_attention_mask: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → text_outputs (CausalLMOutputWithPast, or tuple(torch.FloatTensor) if return_dict=False) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are decoder input IDs? T5 uses the pad_token_id as the starting token for decoder_input_ids generation. If past_key_values is used, optionally only the last decoder_input_ids have to be input (see past_key_values). To know more on how to prepare decoder_input_ids for pretraining take a look at T5 Training. decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default. 
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns text_outputs (CausalLMOutputWithPast, or tuple(torch.FloatTensor) if return_dict=False) The language model outputs. If return_dict=True, the output is a CausalLMOutputWithPast that contains the language model logits, the past key values and the hidden states if output_hidden_states=True. The Blip2Model forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> import torch >>> from transformers import AutoTokenizer, Blip2Model >>> device = "cuda" if torch.cuda.is_available() else "cpu" >>> model = Blip2Model.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16) >>> model.to(device) >>> tokenizer = AutoTokenizer.from_pretrained("Salesforce/blip2-opt-2.7b") >>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt").to(device) >>> text_features = model.get_text_features(**inputs) get_image_features < source > ( pixel_values: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → vision_outputs (BaseModelOutputWithPooling or tuple of torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using Blip2Processor. See Blip2Processor.__call__() for details. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns vision_outputs (BaseModelOutputWithPooling or tuple of torch.FloatTensor) The vision model outputs. If return_dict=True, the output is a BaseModelOutputWithPooling that contains the image features, the pooled image features and the hidden states if output_hidden_states=True. The Blip2Model forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Examples: >>> import torch >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, Blip2Model >>> device = "cuda" if torch.cuda.is_available() else "cpu" >>> model = Blip2Model.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16) >>> model.to(device) >>> processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-2.7b") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor(images=image, return_tensors="pt").to(device, torch.float16) >>> image_outputs = model.get_image_features(**inputs) get_qformer_features < source > ( pixel_values: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → vision_outputs (BaseModelOutputWithPooling or tuple of torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using Blip2Processor. See Blip2Processor.__call__() for details. input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of input sequence tokens in the vocabulary of the language model. Input tokens can optionally be provided to serve as text prompt, which the language model can continue. Indices can be obtained using Blip2Processor. See Blip2Processor.__call__() for details. What are input IDs? attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary of the language model. Only relevant in case an encoder-decoder language model (like T5) is used. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are decoder input IDs? decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default. Only relevant in case an encoder-decoder language model (like T5) is used. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns vision_outputs (BaseModelOutputWithPooling or tuple of torch.FloatTensor) The vision model outputs. If return_dict=True, the output is a BaseModelOutputWithPooling that contains the image features, the pooled image features and the hidden states if output_hidden_states=True. The Blip2Model forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Examples: >>> import torch >>> from PIL import Image >>> import requests >>> from transformers import Blip2Processor, Blip2Model >>> device = "cuda" if torch.cuda.is_available() else "cpu" >>> processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") >>> model = Blip2Model.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16) >>> model.to(device) >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor(images=image, return_tensors="pt").to(device, torch.float16) >>> qformer_outputs = model.get_qformer_features(**inputs) Blip2ForConditionalGeneration class transformers.Blip2ForConditionalGeneration < source > ( config: Blip2Config ) Parameters config (Blip2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. BLIP-2 Model for generating text given an image and an optional text prompt. The model consists of a vision encoder, Querying Transformer (Q-Former) and a language model. One can optionally pass input_ids to the model, which serve as a text prompt, to make the language model continue the prompt. Otherwise, the language model starts generating text from the [BOS] (beginning-of-sequence) token. Note that Flan-T5 checkpoints cannot be cast to float16. They are pre-trained using bfloat16. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: FloatTensor input_ids: FloatTensor attention_mask: typing.Optional[torch.LongTensor] = None decoder_input_ids: typing.Optional[torch.LongTensor] = None decoder_attention_mask: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None labels: typing.Optional[torch.LongTensor] = None return_dict: typing.Optional[bool] = None ) → transformers.models.blip_2.modeling_blip_2.Blip2ForConditionalGenerationModelOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using Blip2Processor. See Blip2Processor.__call__() for details. input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of input sequence tokens in the vocabulary of the language model. Input tokens can optionally be provided to serve as text prompt, which the language model can continue. Indices can be obtained using Blip2Processor. See Blip2Processor.__call__() for details. What are input IDs? attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary of the language model. 
Only relevant in case an encoder-decoder language model (like T5) is used. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are decoder input IDs? decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default. Only relevant in case an encoder-decoder language model (like T5) is used. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.models.blip_2.modeling_blip_2.Blip2ForConditionalGenerationModelOutput or tuple(torch.FloatTensor) A transformers.models.blip_2.modeling_blip_2.Blip2ForConditionalGenerationModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.blip_2.configuration_blip_2.Blip2VisionConfig'>) and inputs. loss (torch.FloatTensor, optional, returned when labels is provided, torch.FloatTensor of shape (1,)) — Language modeling loss from the language model. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head of the language model. vision_outputs (BaseModelOutputWithPooling) — Outputs of the vision encoder. qformer_outputs (BaseModelOutputWithPoolingAndCrossAttentions) — Outputs of the Q-Former (Querying Transformer). language_model_outputs (CausalLMOutputWithPast or Seq2SeqLMOutput) — Outputs of the language model. The Blip2ForConditionalGeneration forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: Image captioning (without providing a text prompt): >>> from PIL import Image >>> import requests >>> from transformers import Blip2Processor, Blip2ForConditionalGeneration >>> import torch >>> device = "cuda" if torch.cuda.is_available() else "cpu" >>> processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") >>> model = Blip2ForConditionalGeneration.from_pretrained( ... "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16 ... 
) >>> model.to(device) >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor(images=image, return_tensors="pt").to(device, torch.float16) >>> generated_ids = model.generate(**inputs) >>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip() >>> print(generated_text) two cats laying on a couch Visual question answering (prompt = question): >>> from PIL import Image >>> import requests >>> from transformers import Blip2Processor, Blip2ForConditionalGeneration >>> import torch >>> device = "cuda" if torch.cuda.is_available() else "cpu" >>> processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") >>> model = Blip2ForConditionalGeneration.from_pretrained( ... "Salesforce/blip2-opt-2.7b", load_in_8bit=True, device_map={"": 0}, torch_dtype=torch.float16 ... ) >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> prompt = "Question: how many cats are there? Answer:" >>> inputs = processor(images=image, text=prompt, return_tensors="pt").to(device="cuda", dtype=torch.float16) >>> generated_ids = model.generate(**inputs) >>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip() >>> print(generated_text) two Note that int8 inference is also supported through bitsandbytes. This greatly reduces the amount of memory used by the model while maintaining the same performance. >>> from PIL import Image >>> import requests >>> from transformers import Blip2Processor, Blip2ForConditionalGeneration >>> import torch >>> processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl") >>> model = Blip2ForConditionalGeneration.from_pretrained( ... "Salesforce/blip2-flan-t5-xl", load_in_8bit=True, device_map={"": 0}, torch_dtype=torch.bfloat16 ... ) >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> prompt = "Question: how many cats are there? Answer:" >>> inputs = processor(images=image, text=prompt, return_tensors="pt").to(device="cuda", dtype=torch.bfloat16) >>> generated_ids = model.generate(**inputs) >>> generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip() >>> print(generated_text) two generate < source > ( pixel_values: FloatTensor input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.LongTensor] = None **generate_kwargs ) → captions (list) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Input images to be processed. input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — The sequence used as a prompt for the generation. attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices A list of strings of length batch_size * num_captions. Overrides generate function to be able to use the model as a conditional generator.
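The Tips and demo notebooks above also mention chat-like conversations, which none of the examples on this page show end to end. Below is a minimal sketch of how one might chain questions by feeding earlier question/answer pairs back into the prompt. It reuses the Salesforce/blip2-opt-2.7b checkpoint and the "Question: ... Answer:" pattern from the examples above; the way the conversation history is concatenated is an illustrative assumption, not an official chat template.

import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16
).to(device)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Carry previous question/answer pairs forward in the prompt (assumed format,
# following the "Question: ... Answer:" examples above).
context = ""
for question in ["how many cats are there?", "what are they lying on?"]:
    prompt = f"{context}Question: {question} Answer:"
    inputs = processor(images=image, text=prompt, return_tensors="pt").to(device, torch.float16)
    generated_ids = model.generate(**inputs, max_new_tokens=20)
    answer = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
    context = f"{prompt} {answer} "
    print(question, "->", answer)

As with the examples above, the decoded text is stripped and appended to the running context so that each follow-up question can refer to earlier answers.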
https://huggingface.co/docs/transformers/model_doc/bit
Big Transfer (BiT) Overview The BiT model was proposed in Big Transfer (BiT): General Visual Representation Learning by Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby. BiT is a simple recipe for scaling up pre-training of ResNet-like architectures (specifically, ResNetv2). The method results in significant improvements for transfer learning. The abstract from the paper is the following: Transfer of pre-trained representations improves sample efficiency and simplifies hyperparameter tuning when training deep neural networks for vision. We revisit the paradigm of pre-training on large supervised datasets and fine-tuning the model on a target task. We scale up pre-training, and propose a simple recipe that we call Big Transfer (BiT). By combining a few carefully selected components, and transferring using a simple heuristic, we achieve strong performance on over 20 datasets. BiT performs well across a surprisingly wide range of data regimes — from 1 example per class to 1M total examples. BiT achieves 87.5% top-1 accuracy on ILSVRC-2012, 99.4% on CIFAR-10, and 76.3% on the 19 task Visual Task Adaptation Benchmark (VTAB). On small datasets, BiT attains 76.8% on ILSVRC-2012 with 10 examples per class, and 97.0% on CIFAR-10 with 10 examples per class. We conduct detailed analysis of the main components that lead to high transfer performance. Tips: BiT models are equivalent to ResNetv2 in terms of architecture, except that: 1) all batch normalization layers are replaced by group normalization, 2) weight standardization is used for convolutional layers. The authors show that the combination of both is useful for training with large batch sizes, and has a significant impact on transfer learning. This model was contributed by nielsr. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BiT. Image Classification BitForImageClassification is supported by this example script and notebook. See also: Image classification task guide If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. BitConfig class transformers.BitConfig < source > ( num_channels = 3 embedding_size = 64 hidden_sizes = [256, 512, 1024, 2048] depths = [3, 4, 6, 3] layer_type = 'preactivation' hidden_act = 'relu' global_padding = None num_groups = 32 drop_path_rate = 0.0 embedding_dynamic_padding = False output_stride = 32 width_factor = 1 out_features = None out_indices = None **kwargs ) Parameters num_channels (int, optional, defaults to 3) — The number of input channels. embedding_size (int, optional, defaults to 64) — Dimensionality (hidden size) for the embedding layer. hidden_sizes (List[int], optional, defaults to [256, 512, 1024, 2048]) — Dimensionality (hidden size) at each stage. depths (List[int], optional, defaults to [3, 4, 6, 3]) — Depth (number of layers) for each stage. layer_type (str, optional, defaults to "preactivation") — The layer to use, it can be either "preactivation" or "bottleneck". hidden_act (str, optional, defaults to "relu") — The non-linear activation function in each block. If string, "gelu", "relu", "selu" and "gelu_new" are supported. global_padding (str, optional) — Padding strategy to use for the convolutional layers. Can be either "valid", "same", or None. 
num_groups (int, optional, defaults to 32) — Number of groups used for the BitGroupNormActivation layers. drop_path_rate (float, optional, defaults to 0.0) — The drop path rate for the stochastic depth. embedding_dynamic_padding (bool, optional, defaults to False) — Whether or not to make use of dynamic padding for the embedding layer. output_stride (int, optional, defaults to 32) — The output stride of the model. width_factor (int, optional, defaults to 1) — The width factor for the model. out_features (List[str], optional) — If used as backbone, list of features to output. Can be any of "stem", "stage1", "stage2", etc. (depending on how many stages the model has). If unset and out_indices is set, will default to the corresponding stages. If unset and out_indices is unset, will default to the last stage. out_indices (List[int], optional) — If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how many stages the model has). If unset and out_features is set, will default to the corresponding stages. If unset and out_features is unset, will default to the last stage. This is the configuration class to store the configuration of a BitModel. It is used to instantiate a BiT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the BiT google/bit-50 architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import BitConfig, BitModel >>> >>> configuration = BitConfig() >>> >>> model = BitModel(configuration) >>> >>> configuration = model.config BitImageProcessor class transformers.BitImageProcessor < source > ( do_resize: bool = True size: typing.Dict[str, int] = None resample: Resampling = <Resampling.BICUBIC: 3> do_center_crop: bool = True crop_size: typing.Dict[str, int] = None do_rescale: bool = True rescale_factor: typing.Union[int, float] = 0.00392156862745098 do_normalize: bool = True image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None do_convert_rgb: bool = True **kwargs ) Parameters do_resize (bool, optional, defaults to True) — Whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by do_resize in the preprocess method. size (Dict[str, int], optional, defaults to {"shortest_edge": 224}) — Size of the image after resizing. The shortest edge of the image is resized to size["shortest_edge"], with the longest edge resized to keep the input aspect ratio. Can be overridden by size in the preprocess method. resample (PILImageResampling, optional, defaults to PILImageResampling.BICUBIC) — Resampling filter to use if resizing the image. Can be overridden by resample in the preprocess method. do_center_crop (bool, optional, defaults to True) — Whether to center crop the image to the specified crop_size. Can be overridden by do_center_crop in the preprocess method. crop_size (Dict[str, int], optional, defaults to 224) — Size of the output image after applying center_crop. Can be overridden by crop_size in the preprocess method. do_rescale (bool, optional, defaults to True) — Whether to rescale the image by the specified scale rescale_factor. Can be overridden by do_rescale in the preprocess method. 
rescale_factor (int or float, optional, defaults to 1/255) — Scale factor to use if rescaling the image. Can be overridden by rescale_factor in the preprocess method. do_normalize (bool, optional, defaults to True) — Whether to normalize the image. Can be overridden by do_normalize in the preprocess method. image_mean (float or List[float], optional, defaults to OPENAI_CLIP_MEAN) — Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the image_mean parameter in the preprocess method. image_std (float or List[float], optional, defaults to OPENAI_CLIP_STD) — Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the image_std parameter in the preprocess method. do_convert_rgb (bool, optional, defaults to True) — Whether to convert the image to RGB. Constructs a BiT image processor. preprocess < source > ( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]] do_resize: bool = None size: typing.Dict[str, int] = None resample: Resampling = None do_center_crop: bool = None crop_size: int = None do_rescale: bool = None rescale_factor: float = None do_normalize: bool = None image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None do_convert_rgb: bool = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None data_format: typing.Optional[transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'> input_data_format: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None **kwargs ) Parameters images (ImageInput) — Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set do_rescale=False. do_resize (bool, optional, defaults to self.do_resize) — Whether to resize the image. size (Dict[str, int], optional, defaults to self.size) — Size of the image after resizing. Shortest edge of the image is resized to size["shortest_edge"], with the longest edge resized to keep the input aspect ratio. resample (int, optional, defaults to self.resample) — Resampling filter to use if resizing the image. This can be one of the enum PILImageResampling. Only has an effect if do_resize is set to True. do_center_crop (bool, optional, defaults to self.do_center_crop) — Whether to center crop the image. crop_size (Dict[str, int], optional, defaults to self.crop_size) — Size of the center crop. Only has an effect if do_center_crop is set to True. do_rescale (bool, optional, defaults to self.do_rescale) — Whether to rescale the image. rescale_factor (float, optional, defaults to self.rescale_factor) — Rescale factor to rescale the image by if do_rescale is set to True. do_normalize (bool, optional, defaults to self.do_normalize) — Whether to normalize the image. image_mean (float or List[float], optional, defaults to self.image_mean) — Image mean to use for normalization. Only has an effect if do_normalize is set to True. image_std (float or List[float], optional, defaults to self.image_std) — Image standard deviation to use for normalization. 
Only has an effect if do_normalize is set to True. do_convert_rgb (bool, optional, defaults to self.do_convert_rgb) — Whether to convert the image to RGB. return_tensors (str or TensorType, optional) — The type of tensors to return. Can be one of: Unset: Return a list of np.ndarray. TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor. TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor. TensorType.NUMPY or 'np': Return a batch of type np.ndarray. TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray. data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) — The channel dimension format for the output image. Can be one of: "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format. "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format. Unset: Use the channel dimension format of the input image. input_data_format (ChannelDimension or str, optional) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of: "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format. "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format. "none" or ChannelDimension.NONE: image in (height, width) format. Preprocess an image or batch of images. BitModel class transformers.BitModel < source > ( config ) Parameters config (BitConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare BiT model outputting raw features without any specific head on top. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: Tensor output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See BitImageProcessor.call() for details. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor) A transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BitConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state after a pooling operation on the spatial dimensions. 
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, num_channels, height, width). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. The BitModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoImageProcessor, BitModel >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> image_processor = AutoImageProcessor.from_pretrained("google/bit-50") >>> model = BitModel.from_pretrained("google/bit-50") >>> inputs = image_processor(image, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state >>> list(last_hidden_states.shape) [1, 2048, 7, 7] BitForImageClassification class transformers.BitForImageClassification < source > ( config ) Parameters config (BitConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. BiT Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for ImageNet. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See BitImageProcessor.call() for details. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BitConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). 
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the model at the output of each stage. The BitForImageClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoImageProcessor, BitForImageClassification >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> image_processor = AutoImageProcessor.from_pretrained("google/bit-50") >>> model = BitForImageClassification.from_pretrained("google/bit-50") >>> inputs = image_processor(image, return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> >>> predicted_label = logits.argmax(-1).item() >>> print(model.config.id2label[predicted_label]) tiger cat
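The example above only reads off the single highest-scoring class. As a small extension (not part of the original reference, and assuming the model and logits variables from the example above are still in scope), the sketch below applies a softmax to the same logits and lists the five most likely ImageNet labels with their probabilities:
>>> import torch
>>> # turn the raw logits into probabilities and keep the five best classes
>>> probs = torch.nn.functional.softmax(logits, dim=-1)[0]
>>> top5 = torch.topk(probs, k=5)
>>> for score, idx in zip(top5.values.tolist(), top5.indices.tolist()):
...     print(f"{model.config.id2label[idx]}: {score:.3f}")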
https://huggingface.co/docs/transformers/model_doc/bridgetower
BridgeTower Overview The BridgeTower model was proposed in BridgeTower: Building Bridges Between Encoders in Vision-Language Representation Learning by Xiao Xu, Chenfei Wu, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan. The goal of this model is to build a bridge between each uni-modal encoder and the cross-modal encoder to enable comprehensive and detailed interaction at each layer of the cross-modal encoder, thus achieving remarkable performance on various downstream tasks with almost negligible additional parameters and computational costs. This paper has been accepted to the AAAI’23 conference. The abstract from the paper is the following: Vision-Language (VL) models with the TWO-TOWER architecture have dominated visual-language representation learning in recent years. Current VL models either use lightweight uni-modal encoders and learn to extract, align and fuse both modalities simultaneously in a deep cross-modal encoder, or feed the last-layer uni-modal representations from the deep pre-trained uni-modal encoders into the top cross-modal encoder. Both approaches potentially restrict vision-language representation learning and limit model performance. In this paper, we propose BRIDGETOWER, which introduces multiple bridge layers that build a connection between the top layers of uni-modal encoders and each layer of the crossmodal encoder. This enables effective bottom-up cross-modal alignment and fusion between visual and textual representations of different semantic levels of pre-trained uni-modal encoders in the cross-modal encoder. Pre-trained with only 4M images, BRIDGETOWER achieves state-of-the-art performance on various downstream vision-language tasks. In particular, on the VQAv2 test-std set, BRIDGETOWER achieves an accuracy of 78.73%, outperforming the previous state-of-the-art model METER by 1.09% with the same pre-training data and almost negligible additional parameters and computational costs. Notably, when further scaling the model, BRIDGETOWER achieves an accuracy of 81.15%, surpassing models that are pre-trained on orders-of-magnitude larger datasets. BridgeTower architecture. Taken from the original paper. Usage BridgeTower consists of a visual encoder, a textual encoder and cross-modal encoder with multiple lightweight bridge layers. The goal of this approach was to build a bridge between each uni-modal encoder and the cross-modal encoder to enable comprehensive and detailed interaction at each layer of the cross-modal encoder. In principle, one can apply any visual, textual or cross-modal encoder in the proposed architecture. The BridgeTowerProcessor wraps RobertaTokenizer and BridgeTowerImageProcessor into a single instance to both encode the text and prepare the images. The following example shows how to run contrastive learning using BridgeTowerProcessor and BridgeTowerForContrastiveLearning. >>> from transformers import BridgeTowerProcessor, BridgeTowerForContrastiveLearning >>> import requests >>> from PIL import Image >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"] >>> processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-itc") >>> model = BridgeTowerForContrastiveLearning.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-itc") >>> >>> scores = dict() >>> for text in texts: ... ... 
encoding = processor(image, text, return_tensors="pt") ... outputs = model(**encoding) ... scores[text] = outputs The following example shows how to run image-text retrieval using BridgeTowerProcessor and BridgeTowerForImageAndTextRetrieval. >>> from transformers import BridgeTowerProcessor, BridgeTowerForImageAndTextRetrieval >>> import requests >>> from PIL import Image >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"] >>> processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base-itm-mlm") >>> model = BridgeTowerForImageAndTextRetrieval.from_pretrained("BridgeTower/bridgetower-base-itm-mlm") >>> >>> scores = dict() >>> for text in texts: ... ... encoding = processor(image, text, return_tensors="pt") ... outputs = model(**encoding) ... scores[text] = outputs.logits[0, 1].item() The following example shows how to run masked language modeling using BridgeTowerProcessor and BridgeTowerForMaskedLM. >>> from transformers import BridgeTowerProcessor, BridgeTowerForMaskedLM >>> from PIL import Image >>> import requests >>> url = "http://images.cocodataset.org/val2017/000000360943.jpg" >>> image = Image.open(requests.get(url, stream=True).raw).convert("RGB") >>> text = "a <mask> looking out of the window" >>> processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base-itm-mlm") >>> model = BridgeTowerForMaskedLM.from_pretrained("BridgeTower/bridgetower-base-itm-mlm") >>> >>> encoding = processor(image, text, return_tensors="pt") >>> >>> outputs = model(**encoding) >>> results = processor.decode(outputs.logits.argmax(dim=-1).squeeze(0).tolist()) >>> print(results) .a cat looking out of the window. This model was contributed by Anahita Bhiwandiwalla, Tiep Le and Shaoyen Tseng. The original code can be found here. Tips: This implementation of BridgeTower uses RobertaTokenizer to generate text embeddings and OpenAI’s CLIP/ViT model to compute visual embeddings. Checkpoints for pre-trained bridgeTower-base and bridgetower masked language modeling and image text matching are released. Please refer to Table 5 for BridgeTower’s performance on Image Retrieval and other down stream tasks. The PyTorch version of this model is only available in torch 1.10 and higher. BridgeTowerConfig class transformers.BridgeTowerConfig < source > ( share_cross_modal_transformer_layers = True hidden_act = 'gelu' hidden_size = 768 initializer_factor = 1 layer_norm_eps = 1e-05 share_link_tower_layers = False link_tower_type = 'add' num_attention_heads = 12 num_hidden_layers = 6 tie_word_embeddings = False init_layernorm_from_vision_encoder = False text_config = None vision_config = None **kwargs ) Parameters share_cross_modal_transformer_layers (bool, optional, defaults to True) — Whether cross modal transformer layers are shared. hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. initializer_factor (`float“, optional, defaults to 1) — A factor for initializing all weight matrices (should be kept to 1, used internally for initialization testing). layer_norm_eps (float, optional, defaults to 1e-05) — The epsilon used by the layer normalization layers. 
share_link_tower_layers (bool, optional, defaults to False) — Whether the bridge/link tower layers are shared. link_tower_type (str, optional, defaults to "add") — Type of the bridge/link layer. num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. num_hidden_layers (int, optional, defaults to 6) — Number of hidden layers in the Transformer encoder. tie_word_embeddings (bool, optional, defaults to False) — Whether to tie input and output embeddings. init_layernorm_from_vision_encoder (bool, optional, defaults to False) — Whether to init LayerNorm from the vision encoder. text_config (dict, optional) — Dictionary of configuration options used to initialize BridgeTowerTextConfig. vision_config (dict, optional) — Dictionary of configuration options used to initialize BridgeTowerVisionConfig. This is the configuration class to store the configuration of a BridgeTowerModel. It is used to instantiate a BridgeTower model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the bridgetower-base BridgeTower/bridgetower-base architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import BridgeTowerModel, BridgeTowerConfig >>> >>> configuration = BridgeTowerConfig() >>> >>> model = BridgeTowerModel(configuration) >>> >>> configuration = model.config from_text_vision_configs < source > ( text_config: BridgeTowerTextConfig vision_config: BridgeTowerVisionConfig **kwargs ) Instantiate a BridgeTowerConfig (or a derived class) from BridgeTower text model configuration and BridgeTower vision model configuration. Returns: BridgeTowerConfig: An instance of a configuration object BridgeTowerTextConfig class transformers.BridgeTowerTextConfig < source > ( vocab_size = 50265 hidden_size = 768 num_hidden_layers = 12 num_attention_heads = 12 initializer_factor = 1 intermediate_size = 3072 hidden_act = 'gelu' hidden_dropout_prob = 0.1 attention_probs_dropout_prob = 0.1 max_position_embeddings = 514 type_vocab_size = 1 layer_norm_eps = 1e-05 pad_token_id = 1 bos_token_id = 0 eos_token_id = 2 position_embedding_type = 'absolute' use_cache = True **kwargs ) Parameters vocab_size (int, optional, defaults to 50265) — Vocabulary size of the text part of the model. Defines the number of different tokens that can be represented by the input_ids passed when calling BridgeTowerModel. hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder. hidden_act (str or Callable, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported. hidden_dropout_prob (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. 
attention_probs_dropout_prob (float, optional, defaults to 0.1) — The dropout ratio for the attention probabilities. max_position_embeddings (int, optional, defaults to 514) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). type_vocab_size (int, optional, defaults to 1) — The vocabulary size of the token_type_ids. initializer_factor (float, optional, defaults to 1) — A factor for initializing all weight matrices (should be kept to 1, used internally for initialization testing). layer_norm_eps (float, optional, defaults to 1e-05) — The epsilon used by the layer normalization layers. position_embedding_type (str, optional, defaults to "absolute") — Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For positional embeddings use "absolute". For more information on "relative_key", please refer to Self-Attention with Relative Position Representations (Shaw et al.). For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models with Better Relative Position Embeddings (Huang et al.). is_decoder (bool, optional, defaults to False) — Whether the model is used as a decoder or not. If False, the model is used as an encoder. use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if config.is_decoder=True. This is the configuration class to store the text configuration of a BridgeTowerModel. The default values here are copied from RoBERTa. Instantiating a configuration with the defaults will yield a similar configuration to that of the bridgetower-base BridgeTower/bridgetower-base architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import BridgeTowerTextConfig >>> >>> configuration = BridgeTowerTextConfig() >>> >>> configuration BridgeTowerVisionConfig class transformers.BridgeTowerVisionConfig < source > ( hidden_size = 768 num_hidden_layers = 12 num_channels = 3 patch_size = 16 image_size = 288 initializer_factor = 1 layer_norm_eps = 1e-05 stop_gradient = False share_layernorm = True remove_last_layer = False **kwargs ) Parameters hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the visual encoder model. patch_size (int, optional, defaults to 16) — The size (resolution) of each patch. image_size (int, optional, defaults to 288) — The size (resolution) of each image. initializer_factor (float, optional, defaults to 1) — A factor for initializing all weight matrices (should be kept to 1, used internally for initialization testing). layer_norm_eps (float, optional, defaults to 1e-05) — The epsilon used by the layer normalization layers. stop_gradient (bool, optional, defaults to False) — Whether to stop gradient for training. share_layernorm (bool, optional, defaults to True) — Whether LayerNorm layers are shared. remove_last_layer (bool, optional, defaults to False) — Whether to remove the last layer from the vision encoder. This is the configuration class to store the vision configuration of a BridgeTowerModel. 
Instantiating a configuration with the defaults will yield a similar configuration to that of the bridgetower-base BridgeTower/bridgetower-base architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import BridgeTowerVisionConfig >>> >>> configuration = BridgeTowerVisionConfig() >>> >>> configuration BridgeTowerImageProcessor class transformers.BridgeTowerImageProcessor < source > ( do_resize: bool = True size: typing.Dict[str, int] = 288 size_divisor: int = 32 resample: Resampling = <Resampling.BICUBIC: 3> do_rescale: bool = True rescale_factor: typing.Union[int, float] = 0.00392156862745098 do_normalize: bool = True image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None do_center_crop: bool = True do_pad: bool = True **kwargs ) Parameters do_resize (bool, optional, defaults to True) — Whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by the do_resize parameter in the preprocess method. size (Dict[str, int], optional, defaults to 288) — Resize the shorter side of the input to size["shortest_edge"]. The longer side will be limited to under int((1333 / 800) * size["shortest_edge"]) while preserving the aspect ratio. Only has an effect if do_resize is set to True. Can be overridden by the size parameter in the preprocess method. size_divisor (int, optional, defaults to 32) — The size by which to make sure both the height and width can be divided. Only has an effect if do_resize is set to True. Can be overridden by the size_divisor parameter in the preprocess method. resample (PILImageResampling, optional, defaults to PILImageResampling.BICUBIC) — Resampling filter to use if resizing the image. Only has an effect if do_resize is set to True. Can be overridden by the resample parameter in the preprocess method. do_rescale (bool, optional, defaults to True) — Whether to rescale the image by the specified scale rescale_factor. Can be overridden by the do_rescale parameter in the preprocess method. rescale_factor (int or float, optional, defaults to 1/255) — Scale factor to use if rescaling the image. Only has an effect if do_rescale is set to True. Can be overridden by the rescale_factor parameter in the preprocess method. do_normalize (bool, optional, defaults to True) — Whether to normalize the image. Can be overridden by the do_normalize parameter in the preprocess method. image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) — Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the image_mean parameter in the preprocess method. image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) — Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the image_std parameter in the preprocess method. do_center_crop (bool, optional, defaults to True) — Whether to center crop the image. Can be overridden by the do_center_crop parameter in the preprocess method. 
do_pad (bool, optional, defaults to True) — Whether to pad the image to the (max_height, max_width) of the images in the batch. Can be overridden by the do_pad parameter in the preprocess method. Constructs a BridgeTower image processor. preprocess < source > ( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]] do_resize: typing.Optional[bool] = None size: typing.Union[typing.Dict[str, int], NoneType] = None size_divisor: typing.Optional[int] = None resample: Resampling = None do_rescale: typing.Optional[bool] = None rescale_factor: typing.Optional[float] = None do_normalize: typing.Optional[bool] = None image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None do_pad: typing.Optional[bool] = None do_center_crop: typing.Optional[bool] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'> input_data_format: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None **kwargs ) Parameters images (ImageInput) — Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set do_rescale=False. do_resize (bool, optional, defaults to self.do_resize) — Whether to resize the image. size (Dict[str, int], optional, defaults to self.size) — Controls the size of the image after resize. The shortest edge of the image is resized to size["shortest_edge"] whilst preserving the aspect ratio. If the longest edge of this resized image is > int(size["shortest_edge"] * (1333 / 800)), then the image is resized again to make the longest edge equal to int(size["shortest_edge"] * (1333 / 800)). size_divisor (int, optional, defaults to self.size_divisor) — The image is resized to a size that is a multiple of this value. resample (PILImageResampling, optional, defaults to self.resample) — Resampling filter to use if resizing the image. Only has an effect if do_resize is set to True. do_rescale (bool, optional, defaults to self.do_rescale) — Whether to rescale the image values between [0 - 1]. rescale_factor (float, optional, defaults to self.rescale_factor) — Rescale factor to rescale the image by if do_rescale is set to True. do_normalize (bool, optional, defaults to self.do_normalize) — Whether to normalize the image. image_mean (float or List[float], optional, defaults to self.image_mean) — Image mean to normalize the image by if do_normalize is set to True. image_std (float or List[float], optional, defaults to self.image_std) — Image standard deviation to normalize the image by if do_normalize is set to True. do_pad (bool, optional, defaults to self.do_pad) — Whether to pad the image to the (max_height, max_width) in the batch. If True, a pixel mask is also created and returned. do_center_crop (bool, optional, defaults to self.do_center_crop) — Whether to center crop the image. If the input size is smaller than crop_size along any edge, the image is padded with 0’s and then center cropped. return_tensors (str or TensorType, optional) — The type of tensors to return. Can be one of: Unset: Return a list of np.ndarray. TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor. TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor. 
TensorType.NUMPY or 'np': Return a batch of type np.ndarray. TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray. data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) — The channel dimension format for the output image. Can be one of: "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format. "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format. Unset: Use the channel dimension format of the input image. input_data_format (ChannelDimension or str, optional) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of: "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format. "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format. "none" or ChannelDimension.NONE: image in (height, width) format. Preprocess an image or batch of images. BridgeTowerProcessor class transformers.BridgeTowerProcessor < source > ( image_processor tokenizer ) Parameters image_processor (BridgeTowerImageProcessor) — An instance of BridgeTowerImageProcessor. The image processor is a required input. tokenizer (RobertaTokenizerFast) — An instance of [‘RobertaTokenizerFast`]. The tokenizer is a required input. Constructs a BridgeTower processor which wraps a Roberta tokenizer and BridgeTower image processor into a single processor. BridgeTowerProcessor offers all the functionalities of BridgeTowerImageProcessor and RobertaTokenizerFast. See the docstring of call() and decode() for more information. __call__ < source > ( images text: typing.Union[str, typing.List[str], typing.List[typing.List[str]]] = None add_special_tokens: bool = True padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = False truncation: typing.Union[bool, str, transformers.tokenization_utils_base.TruncationStrategy] = None max_length: typing.Optional[int] = None stride: int = 0 pad_to_multiple_of: typing.Optional[int] = None return_token_type_ids: typing.Optional[bool] = None return_attention_mask: typing.Optional[bool] = None return_overflowing_tokens: bool = False return_special_tokens_mask: bool = False return_offsets_mapping: bool = False return_length: bool = False verbose: bool = True return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None **kwargs ) This method uses BridgeTowerImageProcessor.call() method to prepare image(s) for the model, and RobertaTokenizerFast.call() to prepare text for the model. Please refer to the docstring of the above two methods for more information. BridgeTowerModel class transformers.BridgeTowerModel < source > ( config ) Parameters config (BridgeTowerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare BridgeTower Model transformer outputting BridgeTowerModelOutput object without any specific head on top. This model is a PyTorch torch.nn.Module <https://pytorch.org/docs/stable/nn.html#torch.nn.Module>_ subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None pixel_values: typing.Optional[torch.FloatTensor] = None pixel_mask: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None image_embeds: typing.Optional[torch.FloatTensor] = None image_token_type_idx: typing.Optional[int] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None labels: typing.Optional[torch.LongTensor] = None ) → transformers.models.bridgetower.modeling_bridgetower.BridgeTowerModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape ({0})) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape ({0}), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape ({0}), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using BridgeTowerImageProcessor. See BridgeTowerImageProcessor.call() for details. pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) — Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]: 1 for pixels that are real (i.e. not masked), 0 for pixels that are padding (i.e. masked). What are attention masks? <../glossary.html#attention-mask>__ head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape ({0}, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. image_embeds (torch.FloatTensor of shape (batch_size, num_patches, hidden_size), optional) — Optionally, instead of passing pixel_values, you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert pixel_values into patch embeddings. image_token_type_idx (int, optional) — The token type ids for images. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. 
output_hidden_states (bool, optional) — If set to True, hidden states are returned as a list containing the hidden states of text, image, and cross-modal components respectively. i.e. (hidden_states_text, hidden_states_image, hidden_states_cross_modal) where each element is a list of the hidden states of the corresponding modality. hidden_states_txt/img are a list of tensors corresponding to unimodal hidden states and hidden_states_cross_modal is a list of tuples containing cross_modal_text_hidden_states and cross_modal_image_hidden_states of each brdige layer. labels (torch.LongTensor of shape (batch_size,), optional) — Labels are currently not supported. Returns transformers.models.bridgetower.modeling_bridgetower.BridgeTowerModelOutput or tuple(torch.FloatTensor) A transformers.models.bridgetower.modeling_bridgetower.BridgeTowerModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BridgeTowerConfig) and inputs. text_features (torch.FloatTensor of shape (batch_size, text_sequence_length, hidden_size)) — Sequence of hidden-states at the text output of the last layer of the model. image_features (torch.FloatTensor of shape (batch_size, image_sequence_length, hidden_size)) — Sequence of hidden-states at the image output of the last layer of the model. pooler_output (torch.FloatTensor of shape (batch_size, hidden_size x 2)) — Concatenation of last layer hidden-state of the first token of the text and image sequence (classification token), respectively, after further processing through layers used for auxiliary pretraining tasks. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The BridgeTowerModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Examples: >>> from transformers import BridgeTowerProcessor, BridgeTowerModel >>> from PIL import Image >>> import requests >>> >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> text = "hello world" >>> processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base") >>> model = BridgeTowerModel.from_pretrained("BridgeTower/bridgetower-base") >>> inputs = processor(image, text, return_tensors="pt") >>> outputs = model(**inputs) >>> outputs.keys() odict_keys(['text_features', 'image_features', 'pooler_output']) BridgeTowerForContrastiveLearning class transformers.BridgeTowerForContrastiveLearning < source > ( config ) Parameters config (BridgeTowerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. BridgeTower Model with a image-text contrastive head on top computing image-text contrastive loss. This model is a PyTorch torch.nn.Module <https://pytorch.org/docs/stable/nn.html#torch.nn.Module>_ subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None pixel_values: typing.Optional[torch.FloatTensor] = None pixel_mask: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None image_embeds: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = True return_dict: typing.Optional[bool] = None return_loss: typing.Optional[bool] = None ) → transformers.models.bridgetower.modeling_bridgetower.BridgeTowerContrastiveOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape ({0})) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape ({0}), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape ({0}), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using BridgeTowerImageProcessor. See BridgeTowerImageProcessor.call() for details. pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) — Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]: 1 for pixels that are real (i.e. not masked), 0 for pixels that are padding (i.e. masked). What are attention masks? <../glossary.html#attention-mask>__ head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. 
Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape ({0}, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. image_embeds (torch.FloatTensor of shape (batch_size, num_patches, hidden_size), optional) — Optionally, instead of passing pixel_values, you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert pixel_values into patch embeddings. image_token_type_idx (int, optional) — The token type ids for images. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. return_loss (bool, optional) — Whether or not to return the contrastive loss. Returns transformers.models.bridgetower.modeling_bridgetower.BridgeTowerContrastiveOutput or tuple(torch.FloatTensor) A transformers.models.bridgetower.modeling_bridgetower.BridgeTowerContrastiveOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BridgeTowerConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when return_loss is True — Image-text contrastive loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). text_embeds (torch.FloatTensor), optional, returned when model is initialized with with_projection=True) — The text embeddings obtained by applying the projection layer to the pooler_output. image_embeds (torch.FloatTensor), optional, returned when model is initialized with with_projection=True) — The image embeddings obtained by applying the projection layer to the pooler_output. cross_embeds (torch.FloatTensor), optional, returned when model is initialized with with_projection=True) — The text-image cross-modal embeddings obtained by applying the projection layer to the pooler_output. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). The BridgeTowerForContrastiveLearning forward method, overrides the __call__ special method. 
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import BridgeTowerProcessor, BridgeTowerForContrastiveLearning >>> import requests >>> from PIL import Image >>> import torch >>> image_urls = [ ... "https://farm4.staticflickr.com/3395/3428278415_81c3e27f15_z.jpg", ... "http://images.cocodataset.org/val2017/000000039769.jpg", ... ] >>> texts = ["two dogs in a car", "two cats sleeping on a couch"] >>> images = [Image.open(requests.get(url, stream=True).raw) for url in image_urls] >>> processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-itc") >>> model = BridgeTowerForContrastiveLearning.from_pretrained("BridgeTower/bridgetower-large-itm-mlm-itc") >>> inputs = processor(images, texts, padding=True, return_tensors="pt") >>> loss = model(**inputs, return_loss=True).loss >>> inputs = processor(images, texts[::-1], padding=True, return_tensors="pt") >>> loss_swapped = model(**inputs, return_loss=True).loss >>> print("Loss", round(loss.item(), 4)) Loss 0.0019 >>> print("Loss with swapped images", round(loss_swapped.item(), 4)) Loss with swapped images 2.126 BridgeTowerForMaskedLM class transformers.BridgeTowerForMaskedLM < source > ( config ) Parameters config (BridgeTowerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. BridgeTower Model with a language modeling head on top as done during pretraining. This model is a PyTorch torch.nn.Module <https://pytorch.org/docs/stable/nn.html#torch.nn.Module>_ subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None pixel_values: typing.Optional[torch.FloatTensor] = None pixel_mask: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None image_embeds: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None labels: typing.Optional[torch.LongTensor] = None ) → transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. 
Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using BridgeTowerImageProcessor. See BridgeTowerImageProcessor.call() for details. pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) — Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]: 1 for pixels that are real (i.e. not masked), 0 for pixels that are padding (i.e. masked). What are attention masks? <../glossary.html#attention-mask>__ head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. image_embeds (torch.FloatTensor of shape (batch_size, num_patches, hidden_size), optional) — Optionally, instead of passing pixel_values, you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert pixel_values into patch embeddings. image_token_type_idx (int, optional) — The token type ids for images. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size] A transformers.modeling_outputs.MaskedLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BridgeTowerConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. 
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The BridgeTowerForMaskedLM forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import BridgeTowerProcessor, BridgeTowerForMaskedLM >>> from PIL import Image >>> import requests >>> url = "http://images.cocodataset.org/val2017/000000360943.jpg" >>> image = Image.open(requests.get(url, stream=True).raw).convert("RGB") >>> text = "a <mask> looking out of the window" >>> processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base-itm-mlm") >>> model = BridgeTowerForMaskedLM.from_pretrained("BridgeTower/bridgetower-base-itm-mlm") >>> >>> encoding = processor(image, text, return_tensors="pt") >>> >>> outputs = model(**encoding) >>> results = processor.decode(outputs.logits.argmax(dim=-1).squeeze(0).tolist()) >>> print(results) .a cat looking out of the window. BridgeTowerForImageAndTextRetrieval class transformers.BridgeTowerForImageAndTextRetrieval < source > ( config ) Parameters config (BridgeTowerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. BridgeTower Model transformer with a classifier head on top (a linear layer on top of the final hidden state of the [CLS] token) for image-to-text matching. This model is a PyTorch torch.nn.Module <https://pytorch.org/docs/stable/nn.html#torch.nn.Module>_ subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None pixel_values: typing.Optional[torch.FloatTensor] = None pixel_mask: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None image_embeds: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None labels: typing.Optional[torch.LongTensor] = None ) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape ({0})) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape ({0}), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? 
token_type_ids (torch.LongTensor of shape ({0}), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using BridgeTowerImageProcessor. See BridgeTowerImageProcessor.call() for details. pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) — Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]: 1 for pixels that are real (i.e. not masked), 0 for pixels that are padding (i.e. masked). What are attention masks? <../glossary.html#attention-mask>__ head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape ({0}, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. image_embeds (torch.FloatTensor of shape (batch_size, num_patches, hidden_size), optional) — Optionally, instead of passing pixel_values, you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert pixel_values into patch embeddings. image_token_type_idx (int, optional) — The token type ids for images. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, 1), optional) — Labels for computing the image-text matching loss. 0 means the pairs don’t match and 1 means they match. The pairs with 0 will be skipped for calculation. A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BridgeTowerConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. 
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The BridgeTowerForImageAndTextRetrieval forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import BridgeTowerProcessor, BridgeTowerForImageAndTextRetrieval >>> import requests >>> from PIL import Image >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> texts = ["An image of two cats chilling on a couch", "A football player scoring a goal"] >>> processor = BridgeTowerProcessor.from_pretrained("BridgeTower/bridgetower-base-itm-mlm") >>> model = BridgeTowerForImageAndTextRetrieval.from_pretrained("BridgeTower/bridgetower-base-itm-mlm") >>> >>> scores = dict() >>> for text in texts: ... ... encoding = processor(image, text, return_tensors="pt") ... outputs = model(**encoding) ... scores[text] = outputs.logits[0, 1].item()
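The retrieval example above leaves the raw image-text matching scores in the scores dictionary. As a small follow-up sketch (not part of the original documentation, reusing the scores built in the loop above), the best-matching text can then be selected like this:
>>> # pick the caption with the highest matching score for the image
>>> best_text = max(scores, key=scores.get)
>>> print(f"Best matching text: {best_text!r} (score: {scores[best_text]:.2f})")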
https://huggingface.co/docs/transformers/model_doc/blenderbot-small
Blenderbot Small Note that BlenderbotSmallModel and BlenderbotSmallForConditionalGeneration are only used in combination with the checkpoint facebook/blenderbot-90M. Larger Blenderbot checkpoints should instead be used with BlenderbotModel and BlenderbotForConditionalGeneration Overview The Blender chatbot model was proposed in Recipes for building an open-domain chatbot Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston on 30 Apr 2020. The abstract of the paper is the following: Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, and displaying knowledge, empathy and personality appropriately, while maintaining a consistent persona. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models. Tips: Blenderbot Small is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than the left. This model was contributed by patrickvonplaten. The authors’ code can be found here. Documentation resources Causal language modeling task guide Translation task guide Summarization task guide BlenderbotSmallConfig class transformers.BlenderbotSmallConfig < source > ( vocab_size = 50265 max_position_embeddings = 512 encoder_layers = 8 encoder_ffn_dim = 2048 encoder_attention_heads = 16 decoder_layers = 8 decoder_ffn_dim = 2048 decoder_attention_heads = 16 encoder_layerdrop = 0.0 decoder_layerdrop = 0.0 use_cache = True is_encoder_decoder = True activation_function = 'gelu' d_model = 512 dropout = 0.1 attention_dropout = 0.0 activation_dropout = 0.0 init_std = 0.02 decoder_start_token_id = 1 scale_embedding = False pad_token_id = 0 bos_token_id = 1 eos_token_id = 2 forced_eos_token_id = 2 **kwargs ) Parameters vocab_size (int, optional, defaults to 50265) — Vocabulary size of the BlenderbotSmall model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling BlenderbotSmallModel or TFBlenderbotSmallModel. d_model (int, optional, defaults to 512) — Dimensionality of the layers and the pooler layer. encoder_layers (int, optional, defaults to 8) — Number of encoder layers. decoder_layers (int, optional, defaults to 8) — Number of decoder layers. encoder_attention_heads (int, optional, defaults to 16) — Number of attention heads for each attention layer in the Transformer encoder. decoder_attention_heads (int, optional, defaults to 16) — Number of attention heads for each attention layer in the Transformer decoder. decoder_ffn_dim (int, optional, defaults to 2048) — Dimensionality of the “intermediate” (often named feed-forward) layer in decoder. 
encoder_ffn_dim (int, optional, defaults to 2048) — Dimensionality of the “intermediate” (often named feed-forward) layer in encoder. activation_function (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported. dropout (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities. activation_dropout (float, optional, defaults to 0.0) — The dropout ratio for activations inside the fully connected layer. max_position_embeddings (int, optional, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). init_std (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. encoder_layerdrop (float, optional, defaults to 0.0) — The LayerDrop probability for the encoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details. decoder_layerdrop (float, optional, defaults to 0.0) — The LayerDrop probability for the decoder. See the [LayerDrop paper](https://arxiv.org/abs/1909.11556) for more details. scale_embedding (bool, optional, defaults to False) — Scale embeddings by dividing by sqrt(d_model). use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). forced_eos_token_id (int, optional, defaults to 2) — The id of the token to force as the last generated token when max_length is reached. Usually set to eos_token_id. This is the configuration class to store the configuration of a BlenderbotSmallModel. It is used to instantiate a BlenderbotSmall model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the BlenderbotSmall facebook/blenderbot_small-90M architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import BlenderbotSmallConfig, BlenderbotSmallModel >>> # Initializing a BlenderbotSmall facebook/blenderbot_small-90M style configuration >>> configuration = BlenderbotSmallConfig() >>> # Initializing a model (with random weights) from the facebook/blenderbot_small-90M style configuration >>> model = BlenderbotSmallModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config BlenderbotSmallTokenizer class transformers.BlenderbotSmallTokenizer < source > ( vocab_file merges_file bos_token = '__start__' eos_token = '__end__' unk_token = '__unk__' pad_token = '__null__' **kwargs ) Parameters vocab_file (str) — File containing the vocabulary. merges_file (str) — Path to the merges file. bos_token (str, optional, defaults to "__start__") — The beginning of sentence token. eos_token (str, optional, defaults to "__end__") — The end of sentence token. unk_token (str, optional, defaults to "__unk__") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. pad_token (str, optional, defaults to "__null__") — The token used for padding, for example when batching sequences of different lengths. 
**kwargs — Additional keyword arguments passed along to PreTrainedTokenizer Constructs a Blenderbot-90M tokenizer based on BPE (Byte-Pair-Encoding) This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to the superclass for more information regarding methods. build_inputs_with_special_tokens < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — The first tokenized sequence. token_ids_1 (List[int], optional) — The second tokenized sequence. The model input with special tokens. Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. This implementation does not add special tokens and this method should be overridden in a subclass. get_special_tokens_mask < source > ( token_ids_0: typing.List token_ids_1: typing.Optional[typing.List] = None already_has_special_tokens: bool = False ) → A list of integers in the range [0, 1] Parameters token_ids_0 (List[int]) — List of ids of the first sequence. token_ids_1 (List[int], optional) — List of ids of the second sequence. already_has_special_tokens (bool, optional, defaults to False) — Whether or not the token list is already formatted with special tokens for the model. Returns A list of integers in the range [0, 1] 1 for a special token, 0 for a sequence token. Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model or encode_plus methods. create_token_type_ids_from_sequences < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — The first tokenized sequence. token_ids_1 (List[int], optional) — The second tokenized sequence. The token type ids. Create the token type IDs corresponding to the sequences passed. What are token type IDs? Should be overridden in a subclass if the model has a special way of building those. save_vocabulary < source > ( save_directory: str filename_prefix: typing.Optional[str] = None ) BlenderbotSmallTokenizerFast class transformers.BlenderbotSmallTokenizerFast < source > ( vocab_file = None merges_file = None unk_token = '<|endoftext|>' bos_token = '<|endoftext|>' eos_token = '<|endoftext|>' add_prefix_space = False trim_offsets = True **kwargs ) Parameters vocab_file (str) — Path to the vocabulary file. Construct a “fast” BlenderbotSmall tokenizer (backed by HuggingFace’s tokenizers library). create_token_type_ids_from_sequences < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of zeros. Create a mask from the two sequences passed to be used in a sequence-pair classification task. BlenderbotSmall does not make use of token type ids, therefore a list of zeros is returned. BlenderbotSmallModel class transformers.BlenderbotSmallModel < source > ( config: BlenderbotSmallConfig ) Parameters config (BlenderbotSmallConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. 
The bare BlenderbotSmall Model outputting raw hidden-states without any specific head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.Tensor] = None decoder_input_ids: typing.Optional[torch.LongTensor] = None decoder_attention_mask: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.Tensor] = None decoder_head_mask: typing.Optional[torch.Tensor] = None cross_attn_head_mask: typing.Optional[torch.Tensor] = None encoder_outputs: typing.Union[typing.Tuple, transformers.modeling_outputs.BaseModelOutput, NoneType] = None past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None inputs_embeds: typing.Optional[torch.Tensor] = None decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are decoder input IDs? BlenderbotSmall uses the bos_token_id as the starting token for decoder_input_ids generation. If past_key_values is used, optionally only the last decoder_input_ids have to be input (see past_key_values). decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default. head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the cross-attention modules in the decoder. 
Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. encoder_outputs (tuple(tuple(torch.FloatTensor), optional) — Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions) last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional): Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) — Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be input (see past_key_values). This is useful if you want more control over how to convert decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value of inputs_embeds. use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.Seq2SeqModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BlenderbotSmallConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output. 
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs. decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs. encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. The BlenderbotSmallModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: >>> from transformers import AutoTokenizer, BlenderbotSmallModel >>> model = BlenderbotSmallModel.from_pretrained("facebook/blenderbot_small-90M") >>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot_small-90M") >>> inputs = tokenizer("Studies have been shown that owning a dog is good for you", return_tensors="pt") >>> decoder_inputs = tokenizer("Studies show that", return_tensors="pt") >>> outputs = model(input_ids=inputs.input_ids, decoder_input_ids=decoder_inputs.input_ids) >>> last_hidden_states = outputs.last_hidden_state >>> list(last_hidden_states.shape) [1, 3, 512] BlenderbotSmallForConditionalGeneration class transformers.BlenderbotSmallForConditionalGeneration < source > ( config: BlenderbotSmallConfig ) Parameters config (BlenderbotSmallConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The BlenderbotSmall Model with a language modeling head. Can be used for summarization. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.Tensor] = None decoder_input_ids: typing.Optional[torch.LongTensor] = None decoder_attention_mask: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.Tensor] = None decoder_head_mask: typing.Optional[torch.Tensor] = None cross_attn_head_mask: typing.Optional[torch.Tensor] = None encoder_outputs: typing.Union[typing.Tuple, transformers.modeling_outputs.BaseModelOutput, NoneType] = None past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None inputs_embeds: typing.Optional[torch.Tensor] = None decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are decoder input IDs? BlenderbotSmall uses the bos_token_id as the starting token for decoder_input_ids generation. 
If past_key_values is used, optionally only the last decoder_input_ids have to be input (see past_key_values). decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default. head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. encoder_outputs (tuple(tuple(torch.FloatTensor), optional) — Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions) last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional): Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) — Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be input (see past_key_values). This is useful if you want more control over how to convert decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value of inputs_embeds. use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). 
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]. A transformers.modeling_outputs.Seq2SeqLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BlenderbotSmallConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. 
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. The BlenderbotSmallForConditionalGeneration forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Conversation example: >>> from transformers import AutoTokenizer, BlenderbotSmallForConditionalGeneration >>> mname = "facebook/blenderbot_small-90M" >>> model = BlenderbotSmallForConditionalGeneration.from_pretrained(mname) >>> tokenizer = AutoTokenizer.from_pretrained(mname) >>> UTTERANCE = "My friends are cool but they eat too many carbs." >>> print("Human: ", UTTERANCE) Human: My friends are cool but they eat too many carbs. >>> inputs = tokenizer([UTTERANCE], return_tensors="pt") >>> reply_ids = model.generate(**inputs) >>> print("Bot: ", tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0]) Bot: what kind of carbs do they eat? i don't know much about carbs. >>> REPLY = "I'm not sure" >>> print("Human: ", REPLY) Human: I'm not sure >>> NEXT_UTTERANCE = ( ... "My friends are cool but they eat too many carbs.__end__ __start__what kind of carbs do they eat? " ... "i don't know much about carbs__end__ " ... "__start__ I'm not sure." ... ) >>> inputs = tokenizer([NEXT_UTTERANCE], return_tensors="pt") >>> next_reply_ids = model.generate(**inputs) >>> print("Bot: ", tokenizer.batch_decode(next_reply_ids, skip_special_tokens=True)[0]) Bot: they eat a lot of carbs. carbs are high in fat, protein, and fats. BlenderbotSmallForCausalLM class transformers.BlenderbotSmallForCausalLM < source > ( config ) forward < source > ( input_ids: LongTensor = None attention_mask: typing.Optional[torch.Tensor] = None encoder_hidden_states: typing.Optional[torch.FloatTensor] = None encoder_attention_mask: typing.Optional[torch.FloatTensor] = None head_mask: typing.Optional[torch.Tensor] = None cross_attn_head_mask: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. 
See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). The two additional tensors are only required when the model is used as a decoder in a Sequence to Sequence model. Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]. use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. 
A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BlenderbotSmallConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction). logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Cross attentions weights after the attention softmax, used to compute the weighted average in the cross-attention heads. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key, value states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting. Only relevant if config.is_decoder = True. Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. Example: >>> from transformers import AutoTokenizer, BlenderbotSmallForCausalLM >>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot_small-90M") >>> model = BlenderbotSmallForCausalLM.from_pretrained( ... "facebook/blenderbot_small-90M", add_cross_attention=False ... ) >>> assert model.config.is_decoder, f"{model.__class__} has to be configured as a decoder." >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> logits = outputs.logits >>> expected_shape = [1, inputs.input_ids.shape[-1], model.config.vocab_size] >>> list(logits.shape) == expected_shape True TFBlenderbotSmallModel class transformers.TFBlenderbotSmallModel < source > ( *args **kwargs ) Parameters config (BlenderbotSmallConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare BLENDERBOT_SMALL Model outputting raw hidden-states without any specific head on top. This model inherits from TFPreTrainedModel. 
Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: tf.Tensor | None = None attention_mask: tf.Tensor | None = None decoder_input_ids: tf.Tensor | None = None decoder_attention_mask: tf.Tensor | None = None decoder_position_ids: tf.Tensor | None = None head_mask: tf.Tensor | None = None decoder_head_mask: tf.Tensor | None = None cross_attn_head_mask: tf.Tensor | None = None encoder_outputs: Optional[Union[Tuple, TFBaseModelOutput]] = None past_key_values: List[tf.Tensor] | None = None inputs_embeds: tf.Tensor | None = None decoder_inputs_embeds: tf.Tensor | None = None use_cache: Optional[bool] = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: Optional[bool] = False **kwargs ) → transformers.modeling_tf_outputs.TFSeq2SeqModelOutput or tuple(tf.Tensor) Parameters input_ids (tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? decoder_input_ids (tf.Tensor of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are decoder input IDs? BlenderbotSmall uses the bos_token_id as the starting token for decoder_input_ids generation. 
If past_key_values is used, optionally only the last decoder_input_ids have to be input (see past_key_values). decoder_attention_mask (tf.Tensor of shape (batch_size, target_sequence_length), optional) — Will be made by default and ignore pad tokens. It is not recommended to set this for most use cases. decoder_position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. head_mask (tf.Tensor of shape (encoder_layers, encoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. decoder_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. cross_attn_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. encoder_outputs (tf.FloatTensor, optional) — A sequence of hidden-states of shape (batch_size, sequence_length, hidden_size) at the output of the last layer of the encoder. Used in the cross-attention of the decoder. past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). use_cache (bool, optional, defaults to True) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). Set to False during training, True during generation. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). A transformers.modeling_tf_outputs.TFSeq2SeqModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BlenderbotSmallConfig) and inputs. last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model. 
If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output. past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)). Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be used (see past_key_values input) to speed up sequential decoding. decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. The TFBlenderbotSmallModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: >>> from transformers import AutoTokenizer, TFBlenderbotSmallModel >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot_small-90M") >>> model = TFBlenderbotSmallModel.from_pretrained("facebook/blenderbot_small-90M") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> outputs = model(inputs) >>> last_hidden_states = outputs.last_hidden_state TFBlenderbotSmallForConditionalGeneration class transformers.TFBlenderbotSmallForConditionalGeneration < source > ( *args **kwargs ) Parameters config (BlenderbotSmallConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The BLENDERBOT_SMALL Model with a language modeling head. Can be used for summarization. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! 
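For illustration, the three input formats described above are interchangeable when calling the model directly. The following is a minimal sketch (not part of the official doctests) that assumes the facebook/blenderbot_small-90M checkpoint and passes decoder_input_ids explicitly; each of the three calls runs the same forward pass:
>>> from transformers import AutoTokenizer, TFBlenderbotSmallForConditionalGeneration
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot_small-90M")
>>> model = TFBlenderbotSmallForConditionalGeneration.from_pretrained("facebook/blenderbot_small-90M")
>>> enc = tokenizer("My friends are cool but they eat too many carbs.", return_tensors="tf")
>>> dec = tokenizer("what kind of carbs do they eat?", return_tensors="tf")
>>> # 1. all inputs as keyword arguments (like the PyTorch models)
>>> outputs = model(input_ids=enc.input_ids, attention_mask=enc.attention_mask, decoder_input_ids=dec.input_ids)
>>> # 2. all inputs as a dictionary in the first positional argument (the format Keras methods such as fit() prefer)
>>> outputs = model({"input_ids": enc.input_ids, "attention_mask": enc.attention_mask, "decoder_input_ids": dec.input_ids})
>>> # 3. all inputs as a list in the order given in the docstring (input_ids, attention_mask, decoder_input_ids, ...)
>>> outputs = model([enc.input_ids, enc.attention_mask, dec.input_ids])
The dictionary form is usually the most convenient with model.fit(), since it matches what a tf.data pipeline typically yields.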
call < source > ( input_ids: tf.Tensor | None = None attention_mask: tf.Tensor | None = None decoder_input_ids: tf.Tensor | None = None decoder_attention_mask: tf.Tensor | None = None decoder_position_ids: tf.Tensor | None = None head_mask: tf.Tensor | None = None decoder_head_mask: tf.Tensor | None = None cross_attn_head_mask: tf.Tensor | None = None encoder_outputs: Optional[TFBaseModelOutput] = None past_key_values: List[tf.Tensor] | None = None inputs_embeds: tf.Tensor | None = None decoder_inputs_embeds: tf.Tensor | None = None use_cache: Optional[bool] = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or tuple(tf.Tensor) Parameters input_ids (tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? decoder_input_ids (tf.Tensor of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are decoder input IDs? BlenderbotSmall uses the bos_token_id as the starting token for decoder_input_ids generation. If past_key_values is used, optionally only the last decoder_input_ids have to be input (see past_key_values). decoder_attention_mask (tf.Tensor of shape (batch_size, target_sequence_length), optional) — Will be made by default and ignore pad tokens. It is not recommended to set this for most use cases. decoder_position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. head_mask (tf.Tensor of shape (encoder_layers, encoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. decoder_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. cross_attn_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. encoder_outputs (tf.FloatTensor, optional) — A sequence of hidden-states of shape (batch_size, sequence_length, hidden_size) at the output of the last layer of the encoder. Used in the cross-attention of the decoder. past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. 
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). use_cache (bool, optional, defaults to True) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). Set to False during training, True during generation output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). labels (tf.tensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]. A transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BlenderbotSmallConfig) and inputs. loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss. logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)). Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be used (see past_key_values input) to speed up sequential decoding. decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. 
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. The TFBlenderbotSmallForConditionalGeneration forward method overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Conversation example: >>> from transformers import AutoTokenizer, TFBlenderbotSmallForConditionalGeneration >>> mname = "facebook/blenderbot_small-90M" >>> model = TFBlenderbotSmallForConditionalGeneration.from_pretrained(mname) >>> tokenizer = AutoTokenizer.from_pretrained(mname) >>> UTTERANCE = "My friends are cool but they eat too many carbs." >>> print("Human: ", UTTERANCE) >>> inputs = tokenizer([UTTERANCE], return_tensors="tf") >>> reply_ids = model.generate(**inputs) >>> print("Bot: ", tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0]) what kind of carbs do they eat? i don't know much about carbs. >>> REPLY = "I'm not sure" >>> print("Human: ", REPLY) >>> NEXT_UTTERANCE = ( ... "My friends are cool but they eat too many carbs.</s> " ... "<s>what kind of carbs do they eat? i don't know much about carbs.</s> " ... "<s>I'm not sure." ... ) >>> inputs = tokenizer([NEXT_UTTERANCE], return_tensors="tf") >>> inputs.pop("token_type_ids") >>> next_reply_ids = model.generate(**inputs) >>> print("Bot: ", tokenizer.batch_decode(next_reply_ids, skip_special_tokens=True)[0]) FlaxBlenderbotSmallModel class transformers.FlaxBlenderbotSmallModel < source > ( config: BlenderbotSmallConfig input_shape: typing.Tuple[int] = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs ) Parameters config (BlenderbotSmallConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs).
This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype. Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters. If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16(). The bare BlenderbotSmall Model transformer outputting raw hidden-states without any specific head on top. This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a Flax Linen flax.nn.Module subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matter related to general usage and behavior. Finally, this model supports inherent JAX features such as: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization __call__ < source > ( input_ids: Array attention_mask: typing.Optional[jax.Array] = None decoder_input_ids: typing.Optional[jax.Array] = None decoder_attention_mask: typing.Optional[jax.Array] = None position_ids: typing.Optional[jax.Array] = None decoder_position_ids: typing.Optional[jax.Array] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None train: bool = False params: dict = None dropout_rng: PRNGKey = None ) → transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput or tuple(torch.FloatTensor) A transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BlenderbotSmallConfig) and inputs. last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output. past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. 
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. Example: >>> from transformers import AutoTokenizer, FlaxBlenderbotSmallModel >>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot_small-90M") >>> model = FlaxBlenderbotSmallModel.from_pretrained("facebook/blenderbot_small-90M") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="jax") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state encode < source > ( input_ids: Array attention_mask: typing.Optional[jax.Array] = None position_ids: typing.Optional[jax.Array] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None train: bool = False params: dict = None dropout_rng: PRNGKey = None ) → transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(torch.FloatTensor) Parameters input_ids (jnp.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. 
A transformers.modeling_flax_outputs.FlaxBaseModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.blenderbot_small.configuration_blenderbot_small.BlenderbotSmallConfig'>) and inputs. last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Example: >>> from transformers import AutoTokenizer, FlaxBlenderbotSmallForConditionalGeneration >>> model = FlaxBlenderbotSmallForConditionalGeneration.from_pretrained("facebook/blenderbot_small-90M") >>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot_small-90M") >>> text = "My friends are cool but they eat too many carbs." >>> inputs = tokenizer(text, max_length=1024, return_tensors="np") >>> encoder_outputs = model.encode(**inputs) decode < source > ( decoder_input_ids encoder_outputs encoder_attention_mask: typing.Optional[jax.Array] = None decoder_attention_mask: typing.Optional[jax.Array] = None decoder_position_ids: typing.Optional[jax.Array] = None past_key_values: dict = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None train: bool = False params: dict = None dropout_rng: PRNGKey = None ) → transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor) Parameters decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length)) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are decoder input IDs? For translation and summarization training, decoder_input_ids should be provided. If no decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right for denoising pre-training following the paper. encoder_outputs (tuple(tuple(jnp.ndarray)) — Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions) last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. encoder_attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? 
decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default. If you want to change padding behavior, you should modify to your needs. See diagram 1 in the paper for more information on the default strategy. decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) — Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length]. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.blenderbot_small.configuration_blenderbot_small.BlenderbotSmallConfig'>) and inputs. last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output. past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. 
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. Example: >>> import jax.numpy as jnp >>> from transformers import AutoTokenizer, FlaxBlenderbotSmallForConditionalGeneration >>> model = FlaxBlenderbotSmallForConditionalGeneration.from_pretrained("facebook/blenderbot_small-90M") >>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot_small-90M") >>> text = "My friends are cool but they eat too many carbs." >>> inputs = tokenizer(text, max_length=1024, return_tensors="np") >>> encoder_outputs = model.encode(**inputs) >>> decoder_start_token_id = model.config.decoder_start_token_id >>> decoder_input_ids = jnp.ones((inputs.input_ids.shape[0], 1), dtype="i4") * decoder_start_token_id >>> outputs = model.decode(decoder_input_ids, encoder_outputs) >>> last_decoder_hidden_states = outputs.last_hidden_state FlaxBlenderbotSmallForConditionalGeneration class transformers.FlaxBlenderbotSmallForConditionalGeneration < source > ( config: BlenderbotSmallConfig input_shape: typing.Tuple[int] = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs ) Parameters config (BlenderbotSmallConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype. Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters. If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16(). The BLENDERBOT_SMALL Model with a language modeling head. Can be used for summarization. This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a Flax Linen flax.nn.Module subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matter related to general usage and behavior.
Finally, this model supports inherent JAX features such as: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization __call__ < source > ( input_ids: Array attention_mask: typing.Optional[jax.Array] = None decoder_input_ids: typing.Optional[jax.Array] = None decoder_attention_mask: typing.Optional[jax.Array] = None position_ids: typing.Optional[jax.Array] = None decoder_position_ids: typing.Optional[jax.Array] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None train: bool = False params: dict = None dropout_rng: PRNGKey = None ) → transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or tuple(torch.FloatTensor) Parameters input_ids (jnp.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are decoder input IDs? For translation and summarization training, decoder_input_ids should be provided. If no decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right for denoising pre-training following the paper. decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default. If you want to change padding behavior, you should modify to your needs. See diagram 1 in the paper for more information on the default strategy. position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BlenderbotSmallConfig) and inputs. logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). 
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaxBlenderbotSmallPreTrainedModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Summarization example: >>> from transformers import AutoTokenizer, FlaxBlenderbotSmallForConditionalGeneration >>> model = FlaxBlenderbotSmallForConditionalGeneration.from_pretrained("facebook/blenderbot_small-90M") >>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot_small-90M") >>> ARTICLE_TO_SUMMARIZE = "My friends are cool but they eat too many carbs." 
>>> inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors="np") >>> # Generate a summary >>> summary_ids = model.generate(inputs["input_ids"]).sequences >>> print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)) Mask filling example: >>> import jax >>> import jax.numpy as jnp >>> from transformers import AutoTokenizer, FlaxBlenderbotSmallForConditionalGeneration >>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot_small-90M") >>> TXT = "My friends are <mask> but they eat too many carbs." >>> model = FlaxBlenderbotSmallForConditionalGeneration.from_pretrained("facebook/blenderbot_small-90M") >>> input_ids = tokenizer([TXT], return_tensors="np")["input_ids"] >>> logits = model(input_ids).logits >>> # position of the masked token in the input >>> masked_index = int(jnp.where(input_ids[0] == tokenizer.mask_token_id)[0][0]) >>> probs = jax.nn.softmax(logits[0, masked_index], axis=0) >>> # keep the 5 most likely replacement tokens >>> values, predictions = jax.lax.top_k(probs, k=5) >>> tokenizer.decode(predictions).split() encode < source > ( input_ids: Array attention_mask: typing.Optional[jax.Array] = None position_ids: typing.Optional[jax.Array] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None train: bool = False params: dict = None dropout_rng: PRNGKey = None ) → transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(torch.FloatTensor) Parameters input_ids (jnp.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_flax_outputs.FlaxBaseModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.blenderbot_small.configuration_blenderbot_small.BlenderbotSmallConfig'>) and inputs. last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Example: >>> from transformers import AutoTokenizer, FlaxBlenderbotSmallForConditionalGeneration >>> model = FlaxBlenderbotSmallForConditionalGeneration.from_pretrained("facebook/blenderbot_small-90M") >>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot_small-90M") >>> text = "My friends are cool but they eat too many carbs." >>> inputs = tokenizer(text, max_length=1024, return_tensors="np") >>> encoder_outputs = model.encode(**inputs) decode < source > ( decoder_input_ids encoder_outputs encoder_attention_mask: typing.Optional[jax.Array] = None decoder_attention_mask: typing.Optional[jax.Array] = None decoder_position_ids: typing.Optional[jax.Array] = None past_key_values: dict = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None deterministic: bool = True params: dict = None dropout_rng: PRNGKey = None ) → transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor) Parameters decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length)) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are decoder input IDs? For translation and summarization training, decoder_input_ids should be provided. If no decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right for denoising pre-training following the paper. encoder_outputs (tuple(tuple(jnp.ndarray)) — Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions) last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. encoder_attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default. If you want to change padding behavior, you should modify to your needs. See diagram 1 in the paper for more information on the default strategy. decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) — Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length]. 
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.blenderbot_small.configuration_blenderbot_small.BlenderbotSmallConfig'>) and inputs. logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Cross attentions weights after the attention softmax, used to compute the weighted average in the cross-attention heads. past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of jnp.ndarray tuples of length config.n_layers, with each tuple containing the cached key, value states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting. Only relevant if config.is_decoder = True. Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. Example: >>> import jax.numpy as jnp >>> from transformers import AutoTokenizer, FlaxBlenderbotSmallForConditionalGeneration >>> model = FlaxBlenderbotSmallForConditionalGeneration.from_pretrained("facebook/blenderbot_small-90M") >>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot_small-90M") >>> text = "My friends are cool but they eat too many carbs." >>> inputs = tokenizer(text, max_length=1024, return_tensors="np") >>> encoder_outputs = model.encode(**inputs) >>> decoder_start_token_id = model.config.decoder_start_token_id >>> decoder_input_ids = jnp.ones((inputs.input_ids.shape[0], 1), dtype="i4") * decoder_start_token_id >>> outputs = model.decode(decoder_input_ids, encoder_outputs) >>> logits = outputs.logits
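The past_key_values dictionary returned by init_cache and threaded through decode is what enables fast auto-regressive decoding: each call only processes the newest token while reusing the cached keys and values. The snippet below is a minimal greedy-decoding sketch, assuming init_cache follows the usual Flax seq2seq signature (batch_size, max_length, encoder_outputs); max_length = 16 is just an illustrative cap, and in practice model.generate() handles this caching for you.

>>> import jax.numpy as jnp
>>> from transformers import AutoTokenizer, FlaxBlenderbotSmallForConditionalGeneration

>>> model = FlaxBlenderbotSmallForConditionalGeneration.from_pretrained("facebook/blenderbot_small-90M")
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot_small-90M")

>>> inputs = tokenizer("My friends are cool but they eat too many carbs.", return_tensors="np")
>>> encoder_outputs = model.encode(**inputs)

>>> max_length = 16  # illustrative cap on the number of generated tokens
>>> past = model.init_cache(inputs.input_ids.shape[0], max_length, encoder_outputs)
>>> token = jnp.full((1, 1), model.config.decoder_start_token_id, dtype="i4")
>>> generated = []
>>> for step in range(max_length):
...     # with a cache, decoder_position_ids must point at the current position
...     outputs = model.decode(
...         token,
...         encoder_outputs,
...         past_key_values=past,
...         decoder_position_ids=jnp.array([[step]], dtype="i4"),
...     )
...     past = outputs.past_key_values  # updated cache
...     token = outputs.logits[:, -1].argmax(-1)[:, None]  # greedy next token
...     generated.append(int(token[0, 0]))  # a real loop would also stop at eos_token_id
>>> print(tokenizer.decode(generated, skip_special_tokens=True))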
Blenderbot DISCLAIMER: If you see something strange, file a GitHub issue. Overview The Blender chatbot model was proposed in Recipes for building an open-domain chatbot by Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston on 30 Apr 2020. The abstract of the paper is the following: Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, and displaying knowledge, empathy and personality appropriately, while maintaining a consistent persona. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter models, and make our models and code publicly available. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models. Tips: Blenderbot is a model with absolute position embeddings, so it’s usually advised to pad the inputs on the right rather than the left. This model was contributed by sshleifer. The authors’ code can be found here. Implementation Notes Blenderbot uses a standard seq2seq transformer-based architecture. Available checkpoints can be found in the model hub. This is the default Blenderbot model class. However, some smaller checkpoints, such as facebook/blenderbot_small-90M, have a different architecture and consequently should be used with BlenderbotSmall. Usage Here is an example of model usage: >>> from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration >>> mname = "facebook/blenderbot-400M-distill" >>> model = BlenderbotForConditionalGeneration.from_pretrained(mname) >>> tokenizer = BlenderbotTokenizer.from_pretrained(mname) >>> UTTERANCE = "My friends are cool but they eat too many carbs." >>> inputs = tokenizer([UTTERANCE], return_tensors="pt") >>> reply_ids = model.generate(**inputs) >>> print(tokenizer.batch_decode(reply_ids)) ["<s> That's unfortunate. Are they trying to lose weight or are they just trying to be healthier?</s>"] Documentation resources Causal language modeling task guide Translation task guide Summarization task guide BlenderbotConfig class transformers.BlenderbotConfig < source > ( vocab_size = 8008 max_position_embeddings = 128 encoder_layers = 2 encoder_ffn_dim = 10240 encoder_attention_heads = 32 decoder_layers = 24 decoder_ffn_dim = 10240 decoder_attention_heads = 32 encoder_layerdrop = 0.0 decoder_layerdrop = 0.0 use_cache = True is_encoder_decoder = True activation_function = 'gelu' d_model = 2560 dropout = 0.1 attention_dropout = 0.0 activation_dropout = 0.0 init_std = 0.02 decoder_start_token_id = 1 scale_embedding = False pad_token_id = 0 bos_token_id = 1 eos_token_id = 2 encoder_no_repeat_ngram_size = 3 forced_eos_token_id = 2 **kwargs ) Parameters vocab_size (int, optional, defaults to 8008) — Vocabulary size of the Blenderbot model.
Defines the number of different tokens that can be represented by the inputs_ids passed when calling BlenderbotModel or TFBlenderbotModel. d_model (int, optional, defaults to 2560) — Dimensionality of the layers and the pooler layer. encoder_layers (int, optional, defaults to 2) — Number of encoder layers. decoder_layers (int, optional, defaults to 24) — Number of decoder layers. encoder_attention_heads (int, optional, defaults to 32) — Number of attention heads for each attention layer in the Transformer encoder. decoder_attention_heads (int, optional, defaults to 32) — Number of attention heads for each attention layer in the Transformer decoder. decoder_ffn_dim (int, optional, defaults to 10240) — Dimensionality of the “intermediate” (often named feed-forward) layer in the decoder. encoder_ffn_dim (int, optional, defaults to 10240) — Dimensionality of the “intermediate” (often named feed-forward) layer in the encoder. activation_function (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported. dropout (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities. activation_dropout (float, optional, defaults to 0.0) — The dropout ratio for activations inside the fully connected layer. max_position_embeddings (int, optional, defaults to 128) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). init_std (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. encoder_layerdrop (float, optional, defaults to 0.0) — The LayerDrop probability for the encoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556) for more details. decoder_layerdrop (float, optional, defaults to 0.0) — The LayerDrop probability for the decoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556) for more details. scale_embedding (bool, optional, defaults to False) — Scale embeddings by dividing by sqrt(d_model). use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). forced_eos_token_id (int, optional, defaults to 2) — The id of the token to force as the last generated token when max_length is reached. Usually set to eos_token_id. This is the configuration class to store the configuration of a BlenderbotModel. It is used to instantiate a Blenderbot model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Blenderbot facebook/blenderbot-3B architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example: >>> from transformers import BlenderbotConfig, BlenderbotModel >>> # Initializing a Blenderbot facebook/blenderbot-3B style configuration >>> configuration = BlenderbotConfig() >>> # Initializing a model (with random weights) from the configuration >>> model = BlenderbotModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config BlenderbotTokenizer class transformers.BlenderbotTokenizer < source > ( vocab_file merges_file errors = 'replace' bos_token = '<s>' eos_token = '</s>' sep_token = '</s>' cls_token = '<s>' unk_token = '<unk>' pad_token = '<pad>' mask_token = '<mask>' add_prefix_space = False **kwargs ) Parameters vocab_file (str) — Path to the vocabulary file. merges_file (str) — Path to the merges file. errors (str, optional, defaults to "replace") — Paradigm to follow when decoding bytes to UTF-8. See bytes.decode for more information. bos_token (str, optional, defaults to "<s>") — The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token. When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the cls_token. eos_token (str, optional, defaults to "</s>") — The end of sequence token. When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the sep_token. sep_token (str, optional, defaults to "</s>") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. cls_token (str, optional, defaults to "<s>") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. unk_token (str, optional, defaults to "<unk>") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. pad_token (str, optional, defaults to "<pad>") — The token used for padding, for example when batching sequences of different lengths. mask_token (str, optional, defaults to "<mask>") — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. add_prefix_space (bool, optional, defaults to False) — Whether or not to add an initial space to the input. This allows treating the leading word just like any other word. (The Blenderbot tokenizer detects the beginning of a word by the preceding space.) Constructs a Blenderbot tokenizer, derived from the GPT-2 tokenizer, using byte-level Byte-Pair-Encoding. This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece), so a word will be encoded differently depending on whether it is at the beginning of the sentence (without a space) or not: >>> from transformers import BlenderbotTokenizer >>> tokenizer = BlenderbotTokenizer.from_pretrained("facebook/blenderbot-3B") >>> tokenizer.add_prefix_space = False >>> tokenizer("Hello world")["input_ids"] [47, 921, 86, 1085, 2] >>> tokenizer(" Hello world")["input_ids"] [6950, 1085, 2] You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer or when you call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance. When used with is_split_into_words=True, this tokenizer will add a space before each word (even the first one).
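As noted above, the leading-word behavior can be changed with add_prefix_space, either when loading the tokenizer or per call. A small sketch of both options follows; the resulting token ids are omitted here since they depend on the checkpoint's vocabulary.

>>> from transformers import BlenderbotTokenizer

>>> # pass the flag when instantiating the tokenizer ...
>>> tokenizer = BlenderbotTokenizer.from_pretrained("facebook/blenderbot-3B", add_prefix_space=True)
>>> ids_at_init = tokenizer("Hello world")["input_ids"]  # "Hello" is now encoded like a non-initial word

>>> # ... or per call, on a tokenizer created with the default settings
>>> default_tokenizer = BlenderbotTokenizer.from_pretrained("facebook/blenderbot-3B")
>>> ids_per_call = default_tokenizer("Hello world", add_prefix_space=True)["input_ids"]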
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. build_inputs_with_special_tokens < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs to which the special tokens will be added token_ids_1 (List[int], optional) — Will be ignored list of input IDs with the appropriate special tokens. Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A Blenderbot sequence has the following format: single sequence: X </s> BlenderbotTokenizerFast class transformers.BlenderbotTokenizerFast < source > ( vocab_file = None merges_file = None tokenizer_file = None errors = 'replace' bos_token = '<s>' eos_token = '</s>' sep_token = '</s>' cls_token = '<s>' unk_token = '<unk>' pad_token = '<pad>' mask_token = '<mask>' add_prefix_space = False trim_offsets = True **kwargs ) Parameters vocab_file (str) — Path to the vocabulary file. merges_file (str) — Path to the merges file. errors (str, optional, defaults to "replace") — Paradigm to follow when decoding bytes to UTF-8. See bytes.decode for more information. bos_token (str, optional, defaults to "<s>") — The beginning of sequence token that was used during pretraining. Can be used a sequence classifier token. When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the cls_token. eos_token (str, optional, defaults to "</s>") — The end of sequence token. When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the sep_token. sep_token (str, optional, defaults to "</s>") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. cls_token (str, optional, defaults to "<s>") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. unk_token (str, optional, defaults to "<unk>") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. pad_token (str, optional, defaults to "<pad>") — The token used for padding, for example when batching sequences of different lengths. mask_token (str, optional, defaults to "<mask>") — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. add_prefix_space (bool, optional, defaults to False) — Whether or not to add an initial space to the input. This allows to treat the leading word just as any other word. (Blenderbot tokenizer detect beginning of words by the preceding space). trim_offsets (bool, optional, defaults to True) — Whether the post processing step should trim offsets to avoid including whitespaces. Construct a “fast” Blenderbot tokenizer (backed by HuggingFace’s tokenizers library), derived from the GPT-2 tokenizer, using byte-level Byte-Pair-Encoding. 
This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will be encoded differently whether it is at the beginning of the sentence (without space) or not: >>> from transformers import BlenderbotTokenizerFast >>> tokenizer = BlenderbotTokenizerFast.from_pretrained("facebook/blenderbot-3B") >>> tokenizer("Hello world")["input_ids"] [6950, 1085, 2] >>> tokenizer(" Hello world")["input_ids"] [6950, 1085, 2] You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer or when you call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance. When used with is_split_into_words=True, this tokenizer needs to be instantiated with add_prefix_space=True. This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. build_inputs_with_special_tokens < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs to which the special tokens will be added token_ids_1 (List[int], optional) — Will be ignored list of input IDs with the appropriate special tokens. Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A Blenderbot sequence has the following format: single sequence: X </s> BlenderbotModel See transformers.BartModel for arguments to forward and generate class transformers.BlenderbotModel < source > ( config: BlenderbotConfig ) Parameters config (BlenderbotConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare Blenderbot Model outputting raw hidden-states without any specific head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.Tensor] = None decoder_input_ids: typing.Optional[torch.LongTensor] = None decoder_attention_mask: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.Tensor] = None decoder_head_mask: typing.Optional[torch.Tensor] = None cross_attn_head_mask: typing.Optional[torch.Tensor] = None encoder_outputs: typing.Union[typing.Tuple, transformers.modeling_outputs.BaseModelOutput, NoneType] = None past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None inputs_embeds: typing.Optional[torch.Tensor] = None decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are decoder input IDs? Blenderbot uses the bos_token_id as the starting token for decoder_input_ids generation. If past_key_values is used, optionally only the last decoder_input_ids have to be input (see past_key_values). decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default. head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. encoder_outputs (tuple(tuple(torch.FloatTensor), optional) — Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions) last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. 
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional): Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) — Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be input (see past_key_values). This is useful if you want more control over how to convert decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value of inputs_embeds. use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.Seq2SeqModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BlenderbotConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. 
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs. decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs. encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. The BlenderbotModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, BlenderbotModel >>> model = BlenderbotModel.from_pretrained("facebook/blenderbot-400M-distill") >>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill") >>> inputs = tokenizer("Studies have been shown that owning a dog is good for you", return_tensors="pt") >>> decoder_input_ids = tokenizer("Studies show that", return_tensors="pt").input_ids >>> outputs = model(input_ids=inputs.input_ids, decoder_input_ids=decoder_input_ids) >>> last_hidden_states = outputs.last_hidden_state >>> list(last_hidden_states.shape) [1, 6, 1280] BlenderbotForConditionalGeneration See BartForConditionalGeneration for arguments to forward and generate class transformers.BlenderbotForConditionalGeneration < source > ( config: BlenderbotConfig ) Parameters config (BlenderbotConfig) — Model configuration class with all the parameters of the model. 
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The Blenderbot Model with a language modeling head. Can be used for summarization. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.Tensor] = None decoder_input_ids: typing.Optional[torch.LongTensor] = None decoder_attention_mask: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.Tensor] = None decoder_head_mask: typing.Optional[torch.Tensor] = None cross_attn_head_mask: typing.Optional[torch.Tensor] = None encoder_outputs: typing.Union[typing.Tuple, transformers.modeling_outputs.BaseModelOutput, NoneType] = None past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None inputs_embeds: typing.Optional[torch.Tensor] = None decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are decoder input IDs? Blenderbot uses the bos_token_id as the starting token for decoder_input_ids generation. If past_key_values is used, optionally only the last decoder_input_ids have to be input (see past_key_values). decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default. head_mask (torch.Tensor of shape (encoder_layers, encoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. decoder_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. 
cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the cross-attention modules in the decoder. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. encoder_outputs (tuple(tuple(torch.FloatTensor), optional) — Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions) last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional): Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) — Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be input (see past_key_values). This is useful if you want more control over how to convert decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value of inputs_embeds. use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]. 
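As a concrete illustration of the labels argument described above, here is a minimal training-style sketch (an editorial addition, not an official example); the target sentence is invented for illustration, and, as with other BART-style models, decoder_input_ids are created internally by shifting the labels to the right:

>>> from transformers import AutoTokenizer, BlenderbotForConditionalGeneration

>>> mname = "facebook/blenderbot-400M-distill"
>>> model = BlenderbotForConditionalGeneration.from_pretrained(mname)
>>> tokenizer = AutoTokenizer.from_pretrained(mname)

>>> inputs = tokenizer(["My friends are cool but they eat too many carbs."], return_tensors="pt")
>>> targets = tokenizer(["That's unfortunate. Are they trying to be healthier?"], return_tensors="pt")

>>> # Passing labels makes the forward pass also return the language modeling loss
>>> outputs = model(**inputs, labels=targets.input_ids)
>>> loss = outputs.loss
>>> logits = outputs.logits

The object returned by this call is the Seq2SeqLMOutput described next.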
A transformers.modeling_outputs.Seq2SeqLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BlenderbotConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. The BlenderbotForConditionalGeneration forward method, overrides the __call__ special method. 
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Conversation example: >>> from transformers import AutoTokenizer, BlenderbotForConditionalGeneration >>> mname = "facebook/blenderbot-400M-distill" >>> model = BlenderbotForConditionalGeneration.from_pretrained(mname) >>> tokenizer = AutoTokenizer.from_pretrained(mname) >>> UTTERANCE = "My friends are cool but they eat too many carbs." >>> print("Human: ", UTTERANCE) Human: My friends are cool but they eat too many carbs. >>> inputs = tokenizer([UTTERANCE], return_tensors="pt") >>> reply_ids = model.generate(**inputs) >>> print("Bot: ", tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0]) Bot: That's unfortunate. Are they trying to lose weight or are they just trying to be healthier? >>> REPLY = "I'm not sure" >>> print("Human: ", REPLY) Human: I'm not sure >>> NEXT_UTTERANCE = ( ... "My friends are cool but they eat too many carbs.</s> <s>That's unfortunate. " ... "Are they trying to lose weight or are they just trying to be healthier?</s> " ... "<s> I'm not sure." ... ) >>> inputs = tokenizer([NEXT_UTTERANCE], return_tensors="pt") >>> next_reply_ids = model.generate(**inputs) >>> print("Bot: ", tokenizer.batch_decode(next_reply_ids, skip_special_tokens=True)[0]) Bot: I see. Well, it's good that they're trying to change their eating habits. BlenderbotForCausalLM class transformers.BlenderbotForCausalLM < source > ( config ) forward < source > ( input_ids: LongTensor = None attention_mask: typing.Optional[torch.Tensor] = None encoder_hidden_states: typing.Optional[torch.FloatTensor] = None encoder_attention_mask: typing.Optional[torch.FloatTensor] = None head_mask: typing.Optional[torch.Tensor] = None cross_attn_head_mask: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. 
Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). The two additional tensors are only required when the model is used as a decoder in a Sequence to Sequence model. Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]. use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BlenderbotConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction). logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Cross attentions weights after the attention softmax, used to compute the weighted average in the cross-attention heads. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key, value states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting. Only relevant if config.is_decoder = True. Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. Example: >>> from transformers import AutoTokenizer, BlenderbotForCausalLM >>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill") >>> model = BlenderbotForCausalLM.from_pretrained( ... "facebook/blenderbot-400M-distill", add_cross_attention=False ... ) >>> assert model.config.is_decoder, f"{model.__class__} has to be configured as a decoder." >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> logits = outputs.logits >>> expected_shape = [1, inputs.input_ids.shape[-1], model.config.vocab_size] >>> list(logits.shape) == expected_shape True TFBlenderbotModel class transformers.TFBlenderbotModel < source > ( *args **kwargs ) Parameters config (BlenderbotConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare BLENDERBOT Model outputting raw hidden-states without any specific head on top. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! 
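For instance, plain keyword arguments (the first format above) work directly when calling the model; this is a minimal hedged sketch added here, not an official example, and the list and dictionary alternatives are described next:

>>> from transformers import AutoTokenizer, TFBlenderbotModel

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
>>> model = TFBlenderbotModel.from_pretrained("facebook/blenderbot-400M-distill")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> # All inputs passed as keyword arguments, exactly like the PyTorch models
>>> outputs = model(input_ids=inputs.input_ids, attention_mask=inputs.attention_mask)
>>> last_hidden_states = outputs.last_hidden_state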
If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: tf.Tensor | None = None attention_mask: tf.Tensor | None = None decoder_input_ids: tf.Tensor | None = None decoder_attention_mask: tf.Tensor | None = None decoder_position_ids: tf.Tensor | None = None head_mask: tf.Tensor | None = None decoder_head_mask: tf.Tensor | None = None cross_attn_head_mask: tf.Tensor | None = None encoder_outputs: Optional[Union[Tuple, TFBaseModelOutput]] = None past_key_values: List[tf.Tensor] | None = None inputs_embeds: tf.Tensor | None = None decoder_inputs_embeds: tf.Tensor | None = None use_cache: Optional[bool] = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: Optional[bool] = False **kwargs ) → transformers.modeling_tf_outputs.TFSeq2SeqModelOutput or tuple(tf.Tensor) Parameters input_ids (tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? decoder_input_ids (tf.Tensor of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are decoder input IDs? Blenderbot uses the bos_token_id as the starting token for decoder_input_ids generation. If past_key_values is used, optionally only the last decoder_input_ids have to be input (see past_key_values). decoder_attention_mask (tf.Tensor of shape (batch_size, target_sequence_length), optional) — will be made by default and ignore pad tokens. It is not recommended to set this for most use cases. decoder_position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. head_mask (tf.Tensor of shape (encoder_layers, encoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. 
decoder_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. cross_attn_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. encoder_outputs (tf.FloatTensor, optional) — hidden states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. of shape (batch_size, sequence_length, hidden_size) is a sequence of past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) — contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). use_cache (bool, optional, defaults to True) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). Set to False during training, True during generation output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). A transformers.modeling_tf_outputs.TFSeq2SeqModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BlenderbotConfig) and inputs. last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output. past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)). Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be used (see past_key_values input) to speed up sequential decoding. 
decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. The TFBlenderbotModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFBlenderbotModel >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill") >>> model = TFBlenderbotModel.from_pretrained("facebook/blenderbot-400M-distill") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> outputs = model(inputs) >>> last_hidden_states = outputs.last_hidden_state TFBlenderbotForConditionalGeneration class transformers.TFBlenderbotForConditionalGeneration < source > ( *args **kwargs ) Parameters config (BlenderbotConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The BLENDERBOT Model with a language modeling head. Can be used for summarization. This model inherits from TFPreTrainedModel. 
Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: tf.Tensor | None = None attention_mask: tf.Tensor | None = None decoder_input_ids: tf.Tensor | None = None decoder_attention_mask: tf.Tensor | None = None decoder_position_ids: tf.Tensor | None = None head_mask: tf.Tensor | None = None decoder_head_mask: tf.Tensor | None = None cross_attn_head_mask: tf.Tensor | None = None encoder_outputs: Optional[Union[Tuple, TFBaseModelOutput]] = None past_key_values: List[tf.Tensor] | None = None inputs_embeds: tf.Tensor | None = None decoder_inputs_embeds: tf.Tensor | None = None use_cache: Optional[bool] = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or tuple(tf.Tensor) Parameters input_ids (tf.Tensor of shape ({0})) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (tf.Tensor of shape ({0}), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? decoder_input_ids (tf.Tensor of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are decoder input IDs? Blenderbot uses the bos_token_id as the starting token for decoder_input_ids generation. 
If past_key_values is used, optionally only the last decoder_input_ids have to be input (see past_key_values). decoder_attention_mask (tf.Tensor of shape (batch_size, target_sequence_length), optional) — will be made by default and ignore pad tokens. It is not recommended to set this for most use cases. decoder_position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. head_mask (tf.Tensor of shape (encoder_layers, encoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the encoder. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. decoder_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. cross_attn_head_mask (tf.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. encoder_outputs (tf.FloatTensor, optional) — hidden states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. of shape (batch_size, sequence_length, hidden_size) is a sequence of past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) — contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). use_cache (bool, optional, defaults to True) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). Set to False during training, True during generation output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). labels (tf.tensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]. 
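Mirroring the PyTorch usage, here is a hedged sketch (an editorial addition, not from the original docstring) of how the labels argument above produces the loss described in the return section that follows; the target sentence is invented for illustration:

>>> from transformers import AutoTokenizer, TFBlenderbotForConditionalGeneration

>>> mname = "facebook/blenderbot-400M-distill"
>>> model = TFBlenderbotForConditionalGeneration.from_pretrained(mname)
>>> tokenizer = AutoTokenizer.from_pretrained(mname)

>>> inputs = tokenizer(["My friends are cool but they eat too many carbs."], return_tensors="tf")
>>> labels = tokenizer(["That's unfortunate."], return_tensors="tf").input_ids

>>> outputs = model(input_ids=inputs.input_ids, attention_mask=inputs.attention_mask, labels=labels)
>>> loss = outputs.loss  # one value per non-masked label, see the loss entry in the returns below
>>> logits = outputs.logits

The returned object is the TFSeq2SeqLMOutput described next.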
A transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BlenderbotConfig) and inputs. loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss. logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)). Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be used (see past_key_values input) to speed up sequential decoding. decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. The TFBlenderbotForConditionalGeneration forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Conversation example:

>>> from transformers import AutoTokenizer, TFBlenderbotForConditionalGeneration
>>> mname = "facebook/blenderbot-400M-distill"
>>> model = TFBlenderbotForConditionalGeneration.from_pretrained(mname)
>>> tokenizer = AutoTokenizer.from_pretrained(mname)
>>> UTTERANCE = "My friends are cool but they eat too many carbs."
>>> print("Human: ", UTTERANCE)
>>> inputs = tokenizer([UTTERANCE], return_tensors="tf")
>>> reply_ids = model.generate(**inputs)
>>> print("Bot: ", tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0])
>>> REPLY = "I'm not sure"
>>> print("Human: ", REPLY)
>>> NEXT_UTTERANCE = (
...     "My friends are cool but they eat too many carbs.</s> <s>That's unfortunate. "
...     "Are they trying to lose weight or are they just trying to be healthier?</s> "
...     "<s> I'm not sure."
... )
>>> inputs = tokenizer([NEXT_UTTERANCE], return_tensors="tf")
>>> next_reply_ids = model.generate(**inputs)
>>> print("Bot: ", tokenizer.batch_decode(next_reply_ids, skip_special_tokens=True)[0])

FlaxBlenderbotModel

class transformers.FlaxBlenderbotModel < source > ( config: BlenderbotConfig input_shape: typing.Tuple[int] = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs )

Parameters config (BlenderbotConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

The bare Blenderbot Model transformer outputting raw hidden-states without any specific head on top. This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior. Finally, this model supports inherent JAX features such as: Just-In-Time (JIT) compilation, Automatic Differentiation, Vectorization, Parallelization

__call__ < source > ( input_ids: Array attention_mask: typing.Optional[jax.Array] = None decoder_input_ids: typing.Optional[jax.Array] = None decoder_attention_mask: typing.Optional[jax.Array] = None position_ids: typing.Optional[jax.Array] = None decoder_position_ids: typing.Optional[jax.Array] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None train: bool = False params: dict = None dropout_rng: PRNGKey = None ) → transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput or tuple(torch.FloatTensor)

Parameters input_ids (jnp.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are decoder input IDs? For translation and summarization training, decoder_input_ids should be provided. If no decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right for denoising pre-training following the paper. decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default. If you want to change padding behavior, you should modify to your needs. See diagram 1 in the paper for more information on the default strategy. position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BlenderbotConfig) and inputs. last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output. past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. 
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaxBlenderbotPreTrainedModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FlaxBlenderbotModel >>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill") >>> model = FlaxBlenderbotModel.from_pretrained("facebook/blenderbot-400M-distill") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="jax") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state encode < source > ( input_ids: Array attention_mask: typing.Optional[jax.Array] = None position_ids: typing.Optional[jax.Array] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None train: bool = False params: dict = None dropout_rng: PRNGKey = None ) → transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(torch.FloatTensor) Parameters input_ids (jnp.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. 
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_flax_outputs.FlaxBaseModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.blenderbot.configuration_blenderbot.BlenderbotConfig'>) and inputs. last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Example: >>> from transformers import AutoTokenizer, FlaxBlenderbotForConditionalGeneration >>> model = FlaxBlenderbotForConditionalGeneration.from_pretrained("facebook/blenderbot-400M-distill") >>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill") >>> text = "My friends are cool but they eat too many carbs." >>> inputs = tokenizer(text, max_length=1024, return_tensors="jax") >>> encoder_outputs = model.encode(**inputs) decode < source > ( decoder_input_ids encoder_outputs encoder_attention_mask: typing.Optional[jax.Array] = None decoder_attention_mask: typing.Optional[jax.Array] = None decoder_position_ids: typing.Optional[jax.Array] = None past_key_values: dict = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None train: bool = False params: dict = None dropout_rng: PRNGKey = None ) → transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor) Parameters decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length)) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are decoder input IDs? For translation and summarization training, decoder_input_ids should be provided. If no decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right for denoising pre-training following the paper. encoder_outputs (tuple(tuple(jnp.ndarray)) — Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions) last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. encoder_attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. 
Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default. If you want to change padding behavior, you should modify to your needs. See diagram 1 in the paper for more information on the default strategy. decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) — Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length]. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.blenderbot.configuration_blenderbot.BlenderbotConfig'>) and inputs. last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output. past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. 
cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. Example: >>> import jax.numpy as jnp >>> from transformers import AutoTokenizer, FlaxBlenderbotForConditionalGeneration >>> model = FlaxBlenderbotForConditionalGeneration.from_pretrained("facebook/blenderbot-400M-distill") >>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill") >>> text = "My friends are cool but they eat too many carbs." >>> inputs = tokenizer(text, max_length=1024, return_tensors="jax") >>> encoder_outputs = model.encode(**inputs) >>> decoder_start_token_id = model.config.decoder_start_token_id >>> decoder_input_ids = jnp.ones((inputs.input_ids.shape[0], 1), dtype="i4") * decoder_start_token_id >>> outputs = model.decode(decoder_input_ids, encoder_outputs) >>> last_decoder_hidden_states = outputs.last_hidden_state FlaxBlenderbotForConditionalGeneration class transformers.FlaxBlenderbotForConditionalGeneration < source > ( config: BlenderbotConfig input_shape: typing.Tuple[int] = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs ) Parameters config (BlenderbotConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The Blenderbot Model with a language modeling head. Can be used for summarization. This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a Flax Linen flax.nn.Module subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matter related to general usage and behavior. Finally, this model supports inherent JAX features such as: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization __call__ < source > ( input_ids: Array attention_mask: typing.Optional[jax.Array] = None decoder_input_ids: typing.Optional[jax.Array] = None decoder_attention_mask: typing.Optional[jax.Array] = None position_ids: typing.Optional[jax.Array] = None decoder_position_ids: typing.Optional[jax.Array] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None train: bool = False params: dict = None dropout_rng: PRNGKey = None ) → transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or tuple(torch.FloatTensor) Parameters input_ids (jnp.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. 
What are attention masks? decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are decoder input IDs? For translation and summarization training, decoder_input_ids should be provided. If no decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right for denoising pre-training following the paper. decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default. If you want to change padding behavior, you should modify to your needs. See diagram 1 in the paper for more information on the default strategy. position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BlenderbotConfig) and inputs. logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). 
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaxBlenderbotPreTrainedModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Conversation example:: >>> from transformers import AutoTokenizer, FlaxBlenderbotForConditionalGeneration >>> model = FlaxBlenderbotForConditionalGeneration.from_pretrained("facebook/blenderbot-400M-distill") >>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill") >>> UTTERANCE = "My friends are cool but they eat too many carbs." >>> inputs = tokenizer([UTTERANCE], max_length=1024, return_tensors="np") >>> >>> reply_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=5, early_stopping=True).sequences >>> print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in reply_ids]) encode < source > ( input_ids: Array attention_mask: typing.Optional[jax.Array] = None position_ids: typing.Optional[jax.Array] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None train: bool = False params: dict = None dropout_rng: PRNGKey = None ) → transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(torch.FloatTensor) Parameters input_ids (jnp.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? 
position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_flax_outputs.FlaxBaseModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.blenderbot.configuration_blenderbot.BlenderbotConfig'>) and inputs. last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Example: >>> from transformers import AutoTokenizer, FlaxBlenderbotForConditionalGeneration >>> model = FlaxBlenderbotForConditionalGeneration.from_pretrained("facebook/blenderbot-400M-distill") >>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill") >>> text = "My friends are cool but they eat too many carbs." >>> inputs = tokenizer(text, max_length=1024, return_tensors="jax") >>> encoder_outputs = model.encode(**inputs) decode < source > ( decoder_input_ids encoder_outputs encoder_attention_mask: typing.Optional[jax.Array] = None decoder_attention_mask: typing.Optional[jax.Array] = None decoder_position_ids: typing.Optional[jax.Array] = None past_key_values: dict = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None train: bool = False params: dict = None dropout_rng: PRNGKey = None ) → transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor) Parameters decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length)) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are decoder input IDs? For translation and summarization training, decoder_input_ids should be provided. If no decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right for denoising pre-training following the paper. 
encoder_outputs (tuple(tuple(jnp.ndarray)) — Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions) last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. encoder_attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default. If you want to change padding behavior, you should modify to your needs. See diagram 1 in the paper for more information on the default strategy. decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) — Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length]. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.blenderbot.configuration_blenderbot.BlenderbotConfig'>) and inputs. logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). 
Cross attentions weights after the attention softmax, used to compute the weighted average in the cross-attention heads. past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of jnp.ndarray tuples of length config.n_layers, with each tuple containing the cached key, value states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting. Only relevant if config.is_decoder = True. Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. Example: >>> import jax.numpy as jnp >>> from transformers import AutoTokenizer, FlaxBlenderbotForConditionalGeneration >>> model = FlaxBlenderbotForConditionalGeneration.from_pretrained("facebook/blenderbot-400M-distill") >>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill") >>> text = "My friends are cool but they eat too many carbs." >>> inputs = tokenizer(text, max_length=1024, return_tensors="jax") >>> encoder_outputs = model.encode(**inputs) >>> decoder_start_token_id = model.config.decoder_start_token_id >>> decoder_input_ids = jnp.ones((inputs.input_ids.shape[0], 1), dtype="i4") * decoder_start_token_id >>> outputs = model.decode(decoder_input_ids, encoder_outputs) >>> logits = outputs.logits
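As a complement to the example above, the following is a minimal greedy-decoding sketch (illustrative only, not an official example) that chains encode and decode by hand. For simplicity it skips the past_key_values cache, so the decoder is recomputed at every step; model.generate() remains the recommended entry point in practice.

>>> import jax.numpy as jnp
>>> from transformers import AutoTokenizer, FlaxBlenderbotForConditionalGeneration

>>> model = FlaxBlenderbotForConditionalGeneration.from_pretrained("facebook/blenderbot-400M-distill")
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/blenderbot-400M-distill")

>>> inputs = tokenizer("My friends are cool but they eat too many carbs.", return_tensors="jax")
>>> encoder_outputs = model.encode(**inputs)

>>> # start from the decoder start token and greedily append the argmax token at each step
>>> decoder_input_ids = jnp.full((inputs.input_ids.shape[0], 1), model.config.decoder_start_token_id, dtype="i4")
>>> for _ in range(10):
...     logits = model.decode(decoder_input_ids, encoder_outputs).logits
...     next_token = jnp.argmax(logits[:, -1, :], axis=-1)[:, None].astype("i4")
...     decoder_input_ids = jnp.concatenate([decoder_input_ids, next_token], axis=-1)

>>> print(tokenizer.batch_decode(decoder_input_ids, skip_special_tokens=True))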
https://huggingface.co/docs/transformers/model_doc/big_bird
BigBird Overview The BigBird model was proposed in Big Bird: Transformers for Longer Sequences by Zaheer, Manzil and Guruganesh, Guru and Dubey, Kumar Avinava and Ainslie, Joshua and Alberti, Chris and Ontanon, Santiago and Pham, Philip and Ravula, Anirudh and Wang, Qifan and Yang, Li and others. BigBird is a sparse-attention-based transformer which extends Transformer-based models, such as BERT, to much longer sequences. In addition to sparse attention, BigBird also applies global attention as well as random attention to the input sequence. Theoretically, it has been shown that applying sparse, global, and random attention approximates full attention, while being computationally much more efficient for longer sequences. As a consequence of the capability to handle longer context, BigBird has shown improved performance on various long document NLP tasks, such as question answering and summarization, compared to BERT or RoBERTa. The abstract from the paper is the following: Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP. Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence length due to their full attention mechanism. To remedy this, we propose, BigBird, a sparse attention mechanism that reduces this quadratic dependency to linear. We show that BigBird is a universal approximator of sequence functions and is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our theoretical analysis reveals some of the benefits of having O(1) global tokens (such as CLS), that attend to the entire sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to 8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context, BigBird drastically improves performance on various NLP tasks such as question answering and summarization. We also propose novel applications to genomics data. Tips: For a detailed explanation of how BigBird’s attention works, see this blog post. BigBird comes with 2 implementations: original_full & block_sparse. For sequence lengths < 1024, using original_full is advised, as there is no benefit in using block_sparse attention. The code currently uses a window size of 3 blocks and 2 global blocks. The sequence length must be divisible by the block size. The current implementation supports only ITC. The current implementation doesn’t support num_random_blocks = 0. BigBird is a model with absolute position embeddings, so it’s usually advised to pad the inputs on the right rather than the left. This model was contributed by vasudevgupta. The original code can be found here.
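To make the tips above concrete, here is a short sketch (the checkpoint and hyperparameters are examples only, not prescriptions from the original documentation) showing how to pick the attention implementation and how to pad long inputs to a multiple of the block size.

>>> from transformers import AutoTokenizer, BigBirdModel

>>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")

>>> # sequences shorter than 1024 tokens gain nothing from sparse attention
>>> short_model = BigBirdModel.from_pretrained("google/bigbird-roberta-base", attention_type="original_full")

>>> # long documents keep the default block_sparse attention; the (padded) sequence length
>>> # must be divisible by block_size (64 by default)
>>> long_model = BigBirdModel.from_pretrained("google/bigbird-roberta-base", block_size=64, num_random_blocks=3)
>>> inputs = tokenizer(
...     "A very long document " * 500, padding="max_length", truncation=True, max_length=4096, return_tensors="pt"
... )
>>> last_hidden_state = long_model(**inputs).last_hidden_state  # shape (1, 4096, 768)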
Documentation resources Text classification task guide Token classification task guide Question answering task guide Causal language modeling task guide Masked language modeling task guide Multiple choice task guide BigBirdConfig class transformers.BigBirdConfig < source > ( vocab_size = 50358 hidden_size = 768 num_hidden_layers = 12 num_attention_heads = 12 intermediate_size = 3072 hidden_act = 'gelu_new' hidden_dropout_prob = 0.1 attention_probs_dropout_prob = 0.1 max_position_embeddings = 4096 type_vocab_size = 2 initializer_range = 0.02 layer_norm_eps = 1e-12 use_cache = True pad_token_id = 0 bos_token_id = 1 eos_token_id = 2 sep_token_id = 66 attention_type = 'block_sparse' use_bias = True rescale_embeddings = False block_size = 64 num_random_blocks = 3 classifier_dropout = None **kwargs ) Parameters vocab_size (int, optional, defaults to 50358) — Vocabulary size of the BigBird model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling BigBirdModel. hidden_size (int, optional, defaults to 768) — Dimension of the encoder layers and the pooler layer. num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (int, optional, defaults to 3072) — Dimension of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder. hidden_act (str or function, optional, defaults to "gelu_new") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported. hidden_dropout_prob (float, optional, defaults to 0.1) — The dropout probabilitiy for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (float, optional, defaults to 0.1) — The dropout ratio for the attention probabilities. max_position_embeddings (int, optional, defaults to 4096) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 1024 or 2048 or 4096). type_vocab_size (int, optional, defaults to 2) — The vocabulary size of the token_type_ids passed when calling BigBirdModel. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers. is_decoder (bool, optional, defaults to False) — Whether the model is used as a decoder or not. If False, the model is used as an encoder. use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if config.is_decoder=True. attention_type (str, optional, defaults to "block_sparse") — Whether to use block sparse attention (with n complexity) as introduced in paper or original attention layer (with n^2 complexity). Possible values are "original_full" and "block_sparse". use_bias (bool, optional, defaults to True) — Whether to use bias in query, key, value. rescale_embeddings (bool, optional, defaults to False) — Whether to rescale embeddings with (hidden_size ** 0.5). block_size (int, optional, defaults to 64) — Size of each block. Useful only when attention_type == "block_sparse". 
num_random_blocks (int, optional, defaults to 3) — Each query is going to attend these many number of random blocks. Useful only when attention_type == "block_sparse". classifier_dropout (float, optional) — The dropout ratio for the classification head. This is the configuration class to store the configuration of a BigBirdModel. It is used to instantiate an BigBird model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the BigBird google/bigbird-roberta-base architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import BigBirdConfig, BigBirdModel >>> >>> configuration = BigBirdConfig() >>> >>> model = BigBirdModel(configuration) >>> >>> configuration = model.config BigBirdTokenizer class transformers.BigBirdTokenizer < source > ( vocab_file unk_token = '<unk>' bos_token = '<s>' eos_token = '</s>' pad_token = '<pad>' sep_token = '[SEP]' mask_token = '[MASK]' cls_token = '[CLS]' sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None **kwargs ) Parameters vocab_file (str) — SentencePiece file (generally has a .spm extension) that contains the vocabulary necessary to instantiate a tokenizer. eos_token (str, optional, defaults to "</s>") — The end of sequence token. bos_token (str, optional, defaults to "<s>") — The begin of sequence token. unk_token (str, optional, defaults to "<unk>") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. pad_token (str, optional, defaults to "<pad>") — The token used for padding, for example when batching sequences of different lengths. sep_token (str, optional, defaults to "[SEP]") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. cls_token (str, optional, defaults to "[CLS]") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. mask_token (str, optional, defaults to "[MASK]") — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. sp_model_kwargs (dict, optional) — Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for SentencePiece can be used, among other things, to set: enable_sampling: Enable subword regularization. nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout. nbest_size = {0,1}: No sampling is performed. nbest_size > 1: samples from the nbest_size results. nbest_size < 0: assuming that nbest_size is infinite and samples from the all hypothesis (lattice) using forward-filtering-and-backward-sampling algorithm. alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout. Construct a BigBird tokenizer. Based on SentencePiece. This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. 
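A brief, illustrative sketch (the checkpoint is an example) of loading the tokenizer and inspecting the special-token layout that the helper methods documented below produce:

>>> from transformers import BigBirdTokenizer

>>> tokenizer = BigBirdTokenizer.from_pretrained("google/bigbird-roberta-base")

>>> # single sequences are encoded as [CLS] X [SEP]; pairs as [CLS] A [SEP] B [SEP]
>>> pair = tokenizer("Paris is the capital of France.", "It is known for the Eiffel Tower.")
>>> tokens = tokenizer.convert_ids_to_tokens(pair["input_ids"])
>>> tokens[0], tokens[-1]
('[CLS]', '[SEP]')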
build_inputs_with_special_tokens < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs to which the special tokens will be added. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of input IDs with the appropriate special tokens. Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A Big Bird sequence has the following format: single sequence: [CLS] X [SEP] pair of sequences: [CLS] A [SEP] B [SEP] get_special_tokens_mask < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None already_has_special_tokens: bool = False ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. already_has_special_tokens (bool, optional, defaults to False) — Whether or not the token list is already formatted with special tokens for the model. A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model method. create_token_type_ids_from_sequences < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of token type IDs according to the given sequence(s). Create a mask from the two sequences passed to be used in a sequence-pair classification task. A BERT sequence pair mask has the following format: :: 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 | first sequence | second sequence | If token_ids_1 is None, this method only returns the first portion of the mask (0s). save_vocabulary < source > ( save_directory: str filename_prefix: typing.Optional[str] = None ) BigBirdTokenizerFast class transformers.BigBirdTokenizerFast < source > ( vocab_file = None tokenizer_file = None unk_token = '<unk>' bos_token = '<s>' eos_token = '</s>' pad_token = '<pad>' sep_token = '[SEP]' mask_token = '[MASK]' cls_token = '[CLS]' **kwargs ) Parameters vocab_file (str) — SentencePiece file (generally has a .spm extension) that contains the vocabulary necessary to instantiate a tokenizer. bos_token (str, optional, defaults to "<s>") — The beginning of sequence token that was used during pretraining. Can be used a sequence classifier token. When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the cls_token. eos_token (str, optional, defaults to "</s>") — The end of sequence token. .. note:: When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the sep_token. unk_token (str, optional, defaults to "<unk>") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. sep_token (str, optional, defaults to "[SEP]") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. 
pad_token (str, optional, defaults to "<pad>") — The token used for padding, for example when batching sequences of different lengths. cls_token (str, optional, defaults to "[CLS]") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. mask_token (str, optional, defaults to "[MASK]") — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. Construct a “fast” BigBird tokenizer (backed by HuggingFace’s tokenizers library). Based on Unigram. This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should refer to this superclass for more information regarding those methods build_inputs_with_special_tokens < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs to which the special tokens will be added token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. list of input IDs with the appropriate special tokens. Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. An BigBird sequence has the following format: single sequence: [CLS] X [SEP] pair of sequences: [CLS] A [SEP] B [SEP] create_token_type_ids_from_sequences < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of ids. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of token type IDs according to the given sequence(s). Creates a mask from the two sequences passed to be used in a sequence-pair classification task. An ALBERT sequence pair mask has the following format: 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 | first sequence | second sequence | if token_ids_1 is None, only returns the first portion of the mask (0s). get_special_tokens_mask < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None already_has_special_tokens: bool = False ) → List[int] Parameters token_ids_0 (List[int]) — List of ids. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. already_has_special_tokens (bool, optional, defaults to False) — Set to True if the token list is already formatted with special tokens for the model A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model method. BigBird specific outputs class transformers.models.big_bird.modeling_big_bird.BigBirdForPreTrainingOutput < source > ( loss: typing.Optional[torch.FloatTensor] = None prediction_logits: FloatTensor = None seq_relationship_logits: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters loss (optional, returned when labels is provided, torch.FloatTensor of shape (1,)) — Total loss as the sum of the masked language modeling loss and the next sequence prediction (classification) loss. 
prediction_logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). seq_relationship_logits (torch.FloatTensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Output type of BigBirdForPreTraining. BigBirdModel class transformers.BigBirdModel < source > ( config add_pooling_layer = True ) Parameters config (BigBirdConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare BigBird Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in Attention is all you need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. To behave as a decoder, the model needs to be initialized with the is_decoder argument of the configuration set to True. To be used in a Seq2Seq model, the model needs to be initialized with both the is_decoder argument and add_cross_attention set to True; encoder_hidden_states is then expected as an input to the forward pass. forward < source > ( input_ids: LongTensor = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None encoder_hidden_states: typing.Optional[torch.FloatTensor] = None encoder_attention_mask: typing.Optional[torch.FloatTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer.
See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). A transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BigBirdConfig) and inputs. 
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. The BigBirdModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
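Before the standard encoder-only example below, here is a hedged sketch (not from the original documentation) of the decoder usage described above: with is_decoder=True and add_cross_attention=True, forward() accepts encoder_hidden_states, and the newly added cross-attention weights would still need to be trained.

>>> import torch
>>> from transformers import AutoTokenizer, BigBirdConfig, BigBirdModel

>>> config = BigBirdConfig.from_pretrained("google/bigbird-roberta-base", is_decoder=True, add_cross_attention=True)
>>> decoder = BigBirdModel.from_pretrained("google/bigbird-roberta-base", config=config)
>>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> # stand-in encoder output; in a real Seq2Seq setup this would come from the encoder
>>> encoder_hidden_states = torch.randn(1, 16, config.hidden_size)
>>> outputs = decoder(**inputs, encoder_hidden_states=encoder_hidden_states)
>>> last_hidden_state = outputs.last_hidden_state  # (1, sequence_length, hidden_size)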
Example: >>> from transformers import AutoTokenizer, BigBirdModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base") >>> model = BigBirdModel.from_pretrained("google/bigbird-roberta-base") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state BigBirdForPreTraining class transformers.BigBirdForPreTraining < source > ( config ) forward < source > ( input_ids: LongTensor = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.FloatTensor] = None next_sentence_label: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.big_bird.modeling_big_bird.BigBirdForPreTrainingOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. 
Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size] next_sentence_label (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the next sequence prediction (classification) loss. If specified, nsp loss will be added to masked_lm loss. Input should be a sequence pair (see input_ids docstring) Indices should be in [0, 1]: 0 indicates sequence B is a continuation of sequence A, 1 indicates sequence B is a random sequence. kwargs (Dict[str, any], optional, defaults to {}) — Used to hide legacy arguments that have been deprecated. A transformers.models.big_bird.modeling_big_bird.BigBirdForPreTrainingOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BigBirdConfig) and inputs. loss (optional, returned when labels is provided, torch.FloatTensor of shape (1,)) — Total loss as the sum of the masked language modeling loss and the next sequence prediction (classification) loss. prediction_logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). seq_relationship_logits (torch.FloatTensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The BigBirdForPreTraining forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, BigBirdForPreTraining >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base") >>> model = BigBirdForPreTraining.from_pretrained("google/bigbird-roberta-base") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> prediction_logits = outputs.prediction_logits >>> seq_relationship_logits = outputs.seq_relationship_logits BigBirdForCausalLM class transformers.BigBirdForCausalLM < source > ( config ) Parameters config (BigBirdConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. BigBird Model with a language modeling head on top for CLM fine-tuning. 
This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: LongTensor = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None encoder_hidden_states: typing.Optional[torch.FloatTensor] = None encoder_attention_mask: typing.Optional[torch.FloatTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. 
encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]. use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BigBirdConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction). logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Cross attentions weights after the attention softmax, used to compute the weighted average in the cross-attention heads. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key, value states of the self-attention and the cross-attention layers if the model is used in an encoder-decoder setting.
Only relevant if config.is_decoder = True. Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. The BigBirdForCausalLM forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> import torch >>> from transformers import AutoTokenizer, BigBirdForCausalLM >>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base") >>> model = BigBirdForCausalLM.from_pretrained("google/bigbird-roberta-base") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs, labels=inputs["input_ids"]) >>> loss = outputs.loss >>> logits = outputs.logits BigBirdForMaskedLM class transformers.BigBirdForMaskedLM < source > ( config ) Parameters config (BigBirdConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. BigBird Model with a language modeling head on top. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: LongTensor = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None encoder_hidden_states: typing.Optional[torch.FloatTensor] = None encoder_attention_mask: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. 
Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]. A transformers.modeling_outputs.MaskedLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BigBirdConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The BigBirdForMaskedLM forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: >>> import torch >>> from transformers import AutoTokenizer, BigBirdForMaskedLM >>> from datasets import load_dataset >>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base") >>> model = BigBirdForMaskedLM.from_pretrained("google/bigbird-roberta-base") >>> squad_ds = load_dataset("squad_v2", split="train") >>> # select a long article from the training split >>> LONG_ARTICLE_TARGET = squad_ds[81514]["context"] >>> # a sentence from the article that contains the word "maximum" >>> LONG_ARTICLE_TARGET[332:398] 'the highest values are very close to the theoretical maximum value' >>> # mask the word "maximum" >>> LONG_ARTICLE_TO_MASK = LONG_ARTICLE_TARGET.replace("maximum", "[MASK]") >>> inputs = tokenizer(LONG_ARTICLE_TO_MASK, return_tensors="pt") >>> # the long article is tokenized into 919 tokens >>> list(inputs["input_ids"].shape) [1, 919] >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> # find the masked position and take the highest-scoring prediction >>> mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0] >>> predicted_token_id = logits[0, mask_token_index].argmax(axis=-1) >>> tokenizer.decode(predicted_token_id) 'maximum' >>> labels = tokenizer(LONG_ARTICLE_TARGET, return_tensors="pt")["input_ids"] >>> labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100) >>> outputs = model(**inputs, labels=labels) >>> round(outputs.loss.item(), 2) 1.99 BigBirdForSequenceClassification class transformers.BigBirdForSequenceClassification < source > ( config ) Parameters config (BigBirdConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. BigBird Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: LongTensor = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings.
Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BigBirdConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The BigBirdForSequenceClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: >>> import torch >>> from transformers import AutoTokenizer, BigBirdForSequenceClassification >>> from datasets import load_dataset >>> tokenizer = AutoTokenizer.from_pretrained("l-yohai/bigbird-roberta-base-mnli") >>> model = BigBirdForSequenceClassification.from_pretrained("l-yohai/bigbird-roberta-base-mnli") >>> squad_ds = load_dataset("squad_v2", split="train") >>> LONG_ARTICLE = squad_ds[81514]["context"] >>> inputs = tokenizer(LONG_ARTICLE, return_tensors="pt") >>> >>> list(inputs["input_ids"].shape) [1, 919] >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_id = logits.argmax().item() >>> model.config.id2label[predicted_class_id] 'LABEL_0' >>> num_labels = len(model.config.id2label) >>> model = BigBirdForSequenceClassification.from_pretrained( ... "l-yohai/bigbird-roberta-base-mnli", num_labels=num_labels ... ) >>> labels = torch.tensor(1) >>> loss = model(**inputs, labels=labels).loss >>> round(loss.item(), 2) 1.13 BigBirdForMultipleChoice class transformers.BigBirdForMultipleChoice < source > ( config ) Parameters config (BigBirdConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. BigBird Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: LongTensor = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? 
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See input_ids above) A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BigBirdConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above). Classification scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The BigBirdForMultipleChoice forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, BigBirdForMultipleChoice >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base") >>> model = BigBirdForMultipleChoice.from_pretrained("google/bigbird-roberta-base") >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." >>> choice0 = "It is eaten with a fork and a knife." >>> choice1 = "It is eaten while held in the hand." 
>>> labels = torch.tensor(0).unsqueeze(0) >>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True) >>> outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) >>> >>> loss = outputs.loss >>> logits = outputs.logits BigBirdForTokenClassification class transformers.BigBirdForTokenClassification < source > ( config ) Parameters config (BigBirdConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. BigBird Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: LongTensor = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. 
See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1]. A transformers.modeling_outputs.TokenClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BigBirdConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The BigBirdForTokenClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, BigBirdForTokenClassification >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base") >>> model = BigBirdForTokenClassification.from_pretrained("google/bigbird-roberta-base") >>> inputs = tokenizer( ... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt" ... ) >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_token_class_ids = logits.argmax(-1) >>> # classes are predicted per token, so there may be more predictions than input words >>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]] >>> labels = predicted_token_class_ids >>> loss = model(**inputs, labels=labels).loss BigBirdForQuestionAnswering class transformers.BigBirdForQuestionAnswering < source > ( config add_pooling_layer = False ) Parameters config (BigBirdConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. BigBird Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits). This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None question_lengths: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None start_positions: typing.Optional[torch.LongTensor] = None end_positions: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.big_bird.modeling_big_bird.BigBirdForQuestionAnsweringModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. start_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. end_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. 
Returns transformers.models.big_bird.modeling_big_bird.BigBirdForQuestionAnsweringModelOutput or tuple(torch.FloatTensor) A transformers.models.big_bird.modeling_big_bird.BigBirdForQuestionAnsweringModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BigBirdConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). pooler_output (torch.FloatTensor of shape (batch_size, 1)) — Pooler output from BigBirdModel. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The BigBirdForQuestionAnswering forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> import torch >>> from transformers import AutoTokenizer, BigBirdForQuestionAnswering >>> from datasets import load_dataset >>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base") >>> model = BigBirdForQuestionAnswering.from_pretrained("google/bigbird-roberta-base") >>> squad_ds = load_dataset("squad_v2", split="train") >>> # select a long article and its question from the training split >>> LONG_ARTICLE = squad_ds[81514]["context"] >>> QUESTION = squad_ds[81514]["question"] >>> QUESTION 'During daytime how high can the temperatures reach?' >>> inputs = tokenizer(QUESTION, LONG_ARTICLE, return_tensors="pt") >>> # the question-article pair is tokenized into 929 tokens >>> list(inputs["input_ids"].shape) [1, 929] >>> with torch.no_grad(): ... outputs = model(**inputs) >>> answer_start_index = outputs.start_logits.argmax() >>> answer_end_index = outputs.end_logits.argmax() >>> predict_answer_token_ids = inputs.input_ids[0, answer_start_index : answer_end_index + 1] >>> predict_answer_token = tokenizer.decode(predict_answer_token_ids) >>> target_start_index, target_end_index = torch.tensor([130]), torch.tensor([132]) >>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index) >>> loss = outputs.loss FlaxBigBirdModel class transformers.FlaxBigBirdModel < source > ( config: BigBirdConfig input_shape: typing.Optional[tuple] = None seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True gradient_checkpointing: bool = False **kwargs ) Parameters config (BigBirdConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype. Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters. If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16(). The bare BigBird Model transformer outputting raw hidden-states without any specific head on top. This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading, saving and converting weights from PyTorch models) This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to general usage and behavior. Finally, this model supports inherent JAX features such as: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization __call__ < source > ( input_ids attention_mask = None token_type_ids = None position_ids = None head_mask = None encoder_hidden_states = None encoder_attention_mask = None params: dict = None dropout_rng: typing.Optional[PRNGKey] = None indices_rng: typing.Optional[PRNGKey] = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None past_key_values: dict = None ) → transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or tuple(torch.FloatTensor) Parameters input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) -- Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]`: 1 indicates the head is not masked, 0 indicates the head is masked. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. 
A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BigBirdConfig) and inputs. last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (jnp.ndarray of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaxBigBirdPreTrainedModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FlaxBigBirdModel >>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base") >>> model = FlaxBigBirdModel.from_pretrained("google/bigbird-roberta-base") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="jax") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state FlaxBigBirdForPreTraining class transformers.FlaxBigBirdForPreTraining < source > ( config: BigBirdConfig input_shape: typing.Optional[tuple] = None seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True gradient_checkpointing: bool = False **kwargs ) Parameters config (BigBirdConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype. Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters. If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16(). BigBird Model with two heads on top as done during the pretraining: a masked language modeling head and a next sentence prediction (classification) head. This model inherits from FlaxPreTrainedModel. 
Check the superclass documentation for the generic methods the library implements for all its model (such as downloading, saving and converting weights from PyTorch models) This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to general usage and behavior. Finally, this model supports inherent JAX features such as: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization __call__ < source > ( input_ids attention_mask = None token_type_ids = None position_ids = None head_mask = None encoder_hidden_states = None encoder_attention_mask = None params: dict = None dropout_rng: typing.Optional[PRNGKey] = None indices_rng: typing.Optional[PRNGKey] = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None past_key_values: dict = None ) → transformers.models.big_bird.modeling_flax_big_bird.FlaxBigBirdForPreTrainingOutput or tuple(torch.FloatTensor) Parameters input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) -- Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]`: 1 indicates the head is not masked, 0 indicates the head is masked. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.models.big_bird.modeling_flax_big_bird.FlaxBigBirdForPreTrainingOutput or tuple(torch.FloatTensor) A transformers.models.big_bird.modeling_flax_big_bird.FlaxBigBirdForPreTrainingOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BigBirdConfig) and inputs. prediction_logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). seq_relationship_logits (jnp.ndarray of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation before SoftMax). hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). 
Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaxBigBirdPreTrainedModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FlaxBigBirdForPreTraining >>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base") >>> model = FlaxBigBirdForPreTraining.from_pretrained("google/bigbird-roberta-base") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="np") >>> outputs = model(**inputs) >>> prediction_logits = outputs.prediction_logits >>> seq_relationship_logits = outputs.seq_relationship_logits FlaxBigBirdForCausalLM class transformers.FlaxBigBirdForCausalLM < source > ( config: BigBirdConfig input_shape: typing.Optional[tuple] = None seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True gradient_checkpointing: bool = False **kwargs ) Parameters config (BigBirdConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype. Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters. If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16(). BigBird Model with a language modeling head on top (a linear layer on top of the hidden-states output) e.g for autoregressive tasks. This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading, saving and converting weights from PyTorch models) This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to general usage and behavior. 
Finally, this model supports inherent JAX features such as: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization __call__ < source > ( input_ids attention_mask = None token_type_ids = None position_ids = None head_mask = None encoder_hidden_states = None encoder_attention_mask = None params: dict = None dropout_rng: typing.Optional[PRNGKey] = None indices_rng: typing.Optional[PRNGKey] = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None past_key_values: dict = None ) → transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor) Parameters input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) -- Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]`: 1 indicates the head is not masked, 0 indicates the head is masked. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BigBirdConfig) and inputs. logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). 
Cross attentions weights after the attention softmax, used to compute the weighted average in the cross-attention heads. past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of jnp.ndarray tuples of length config.n_layers, with each tuple containing the cached key, value states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting. Only relevant if config.is_decoder = True. Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. The FlaxBigBirdPreTrainedModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FlaxBigBirdForCausalLM >>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base") >>> model = FlaxBigBirdForCausalLM.from_pretrained("google/bigbird-roberta-base") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="np") >>> outputs = model(**inputs) >>> >>> next_token_logits = outputs.logits[:, -1] FlaxBigBirdForMaskedLM class transformers.FlaxBigBirdForMaskedLM < source > ( config: BigBirdConfig input_shape: typing.Optional[tuple] = None seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True gradient_checkpointing: bool = False **kwargs ) Parameters config (BigBirdConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype. Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters. If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16(). BigBird Model with a language modeling head on top. This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading, saving and converting weights from PyTorch models) This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to general usage and behavior. 
Finally, this model supports inherent JAX features such as: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization __call__ < source > ( input_ids attention_mask = None token_type_ids = None position_ids = None head_mask = None encoder_hidden_states = None encoder_attention_mask = None params: dict = None dropout_rng: typing.Optional[PRNGKey] = None indices_rng: typing.Optional[PRNGKey] = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None past_key_values: dict = None ) → transformers.modeling_flax_outputs.FlaxMaskedLMOutput or tuple(torch.FloatTensor) Parameters input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) -- Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]`: 1 indicates the head is not masked, 0 indicates the head is masked. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_flax_outputs.FlaxMaskedLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BigBirdConfig) and inputs. logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaxBigBirdPreTrainedModel forward method, overrides the __call__ special method. 
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FlaxBigBirdForMaskedLM >>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base") >>> model = FlaxBigBirdForMaskedLM.from_pretrained("google/bigbird-roberta-base") >>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="jax") >>> outputs = model(**inputs) >>> logits = outputs.logits FlaxBigBirdForSequenceClassification class transformers.FlaxBigBirdForSequenceClassification < source > ( config: BigBirdConfig input_shape: typing.Optional[tuple] = None seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True gradient_checkpointing: bool = False **kwargs ) Parameters config (BigBirdConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype. Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters. If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16(). BigBird Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading, saving and converting weights from PyTorch models) This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to general usage and behavior. Finally, this model supports inherent JAX features such as: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization __call__ < source > ( input_ids attention_mask = None token_type_ids = None position_ids = None head_mask = None encoder_hidden_states = None encoder_attention_mask = None params: dict = None dropout_rng: typing.Optional[PRNGKey] = None indices_rng: typing.Optional[PRNGKey] = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None past_key_values: dict = None ) → transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. 
What are attention masks? token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) -- Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]`: 1 indicates the head is not masked, 0 indicates the head is masked. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BigBirdConfig) and inputs. logits (jnp.ndarray of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaxBigBirdPreTrainedModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FlaxBigBirdForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base") >>> model = FlaxBigBirdForSequenceClassification.from_pretrained("google/bigbird-roberta-base") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="jax") >>> outputs = model(**inputs) >>> logits = outputs.logits FlaxBigBirdForMultipleChoice class transformers.FlaxBigBirdForMultipleChoice < source > ( config: BigBirdConfig input_shape: typing.Optional[tuple] = None seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs ) Parameters config (BigBirdConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. 
If specified all the computation will be performed with the given dtype. Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters. If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16(). BigBird Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading, saving and converting weights from PyTorch models) This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to general usage and behavior. Finally, this model supports inherent JAX features such as: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization __call__ < source > ( input_ids attention_mask = None token_type_ids = None position_ids = None head_mask = None encoder_hidden_states = None encoder_attention_mask = None params: dict = None dropout_rng: typing.Optional[PRNGKey] = None indices_rng: typing.Optional[PRNGKey] = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None past_key_values: dict = None ) → transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or tuple(torch.FloatTensor) Parameters input_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. head_mask (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) -- Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]`: 1 indicates the head is not masked, 0 indicates the head is masked. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BigBirdConfig) and inputs. logits (jnp.ndarray of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above). Classification scores (before SoftMax). 
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaxBigBirdPreTrainedModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FlaxBigBirdForMultipleChoice >>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base") >>> model = FlaxBigBirdForMultipleChoice.from_pretrained("google/bigbird-roberta-base") >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." >>> choice0 = "It is eaten with a fork and a knife." >>> choice1 = "It is eaten while held in the hand." >>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="jax", padding=True) >>> outputs = model(**{k: v[None, :] for k, v in encoding.items()}) >>> logits = outputs.logits FlaxBigBirdForTokenClassification class transformers.FlaxBigBirdForTokenClassification < source > ( config: BigBirdConfig input_shape: typing.Optional[tuple] = None seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True gradient_checkpointing: bool = False **kwargs ) Parameters config (BigBirdConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype. Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters. If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16(). BigBird Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading, saving and converting weights from PyTorch models) This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to general usage and behavior. 
Finally, this model supports inherent JAX features such as: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization __call__ < source > ( input_ids attention_mask = None token_type_ids = None position_ids = None head_mask = None encoder_hidden_states = None encoder_attention_mask = None params: dict = None dropout_rng: typing.Optional[PRNGKey] = None indices_rng: typing.Optional[PRNGKey] = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None past_key_values: dict = None ) → transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) -- Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]`: 1 indicates the head is not masked, 0 indicates the head is masked. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BigBirdConfig) and inputs. logits (jnp.ndarray of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaxBigBirdPreTrainedModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: >>> from transformers import AutoTokenizer, FlaxBigBirdForTokenClassification >>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base") >>> model = FlaxBigBirdForTokenClassification.from_pretrained("google/bigbird-roberta-base") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="jax") >>> outputs = model(**inputs) >>> logits = outputs.logits FlaxBigBirdForQuestionAnswering class transformers.FlaxBigBirdForQuestionAnswering < source > ( config: BigBirdConfig input_shape: typing.Optional[tuple] = None seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True gradient_checkpointing: bool = False **kwargs ) Parameters config (BigBirdConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype. Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters. If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16(). BigBird Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute span start logits and span end logits). This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading, saving and converting weights from PyTorch models) This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to general usage and behavior. Finally, this model supports inherent JAX features such as: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization __call__ < source > ( input_ids attention_mask = None token_type_ids = None position_ids = None head_mask = None question_lengths = None params: dict = None dropout_rng: typing.Optional[PRNGKey] = None indices_rng: typing.Optional[PRNGKey] = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.big_bird.modeling_flax_big_bird.FlaxBigBirdForQuestionAnsweringModelOutput or tuple(torch.FloatTensor) Parameters input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. 
Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) -- Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]`: 1 indicates the head is not masked, 0 indicates the head is masked. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.models.big_bird.modeling_flax_big_bird.FlaxBigBirdForQuestionAnsweringModelOutput or tuple(torch.FloatTensor) A transformers.models.big_bird.modeling_flax_big_bird.FlaxBigBirdForQuestionAnsweringModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BigBirdConfig) and inputs. start_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). pooled_output (jnp.ndarray of shape (batch_size, hidden_size)) — pooled_output returned by FlaxBigBirdModel. hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaxBigBirdForQuestionAnswering forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FlaxBigBirdForQuestionAnswering >>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-base") >>> model = FlaxBigBirdForQuestionAnswering.from_pretrained("google/bigbird-roberta-base") >>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" >>> inputs = tokenizer(question, text, return_tensors="jax") >>> outputs = model(**inputs) >>> start_scores = outputs.start_logits >>> end_scores = outputs.end_logits
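As a short follow-up sketch (not part of the original example; it reuses the inputs, tokenizer, start_scores, and end_scores defined just above), the most likely answer span can be decoded from the logits:
>>> import jax.numpy as jnp

>>> # Pick the highest-scoring start and end positions for the first example in the batch.
>>> start_index = int(jnp.argmax(start_scores, axis=-1)[0])
>>> end_index = int(jnp.argmax(end_scores, axis=-1)[0])

>>> # Decode the tokens between the predicted span boundaries (assumes start_index <= end_index).
>>> answer_ids = inputs["input_ids"][0, start_index : end_index + 1]
>>> answer = tokenizer.decode(answer_ids, skip_special_tokens=True)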
BLIP Overview The BLIP model was proposed in BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation by Junnan Li, Dongxu Li, Caiming Xiong, Steven Hoi. BLIP is a model that is able to perform various multi-modal tasks including Visual Question Answering, Image-Text retrieval (image-text matching), and Image Captioning. The abstract from the paper is the following: Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to video-language tasks in a zero-shot manner. Code, models, and datasets are released. This model was contributed by ybelkada. The original code can be found here. Resources Jupyter notebook on how to fine-tune BLIP for image captioning on a custom dataset BlipConfig class transformers.BlipConfig < source > ( text_config = None vision_config = None projection_dim = 512 logit_scale_init_value = 2.6592 image_text_hidden_size = 256 **kwargs ) Parameters text_config (dict, optional) — Dictionary of configuration options used to initialize BlipTextConfig. vision_config (dict, optional) — Dictionary of configuration options used to initialize BlipVisionConfig. projection_dim (int, optional, defaults to 512) — Dimensionality of the text and vision projection layers. logit_scale_init_value (float, optional, defaults to 2.6592) — The initial value of the logit_scale parameter. Default is used as per the original BLIP implementation. image_text_hidden_size (int, optional, defaults to 256) — Dimensionality of the hidden state of the image-text fusion layer. kwargs (optional) — Dictionary of keyword arguments. BlipConfig is the configuration class to store the configuration of a BlipModel. It is used to instantiate a BLIP model according to the specified arguments, defining the text model and vision model configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the BLIP-base Salesforce/blip-vqa-base architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. 
Example: >>> from transformers import BlipConfig, BlipModel, BlipTextConfig, BlipVisionConfig >>> # Initializing a BlipConfig with default (Salesforce/blip-vqa-base style) values >>> configuration = BlipConfig() >>> # Initializing a BlipModel (with random weights) from that configuration >>> model = BlipModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config >>> # A BlipConfig can also be initialized from a text and a vision configuration >>> config_text = BlipTextConfig() >>> config_vision = BlipVisionConfig() >>> config = BlipConfig.from_text_vision_configs(config_text, config_vision) from_text_vision_configs < source > ( text_config: BlipTextConfig vision_config: BlipVisionConfig **kwargs ) → BlipConfig An instance of a configuration object Instantiate a BlipConfig (or a derived class) from a BLIP text model configuration and a BLIP vision model configuration. BlipTextConfig class transformers.BlipTextConfig < source > ( vocab_size = 30524 hidden_size = 768 encoder_hidden_size = 768 intermediate_size = 3072 projection_dim = 768 num_hidden_layers = 12 num_attention_heads = 8 max_position_embeddings = 512 hidden_act = 'gelu' layer_norm_eps = 1e-12 hidden_dropout_prob = 0.0 attention_probs_dropout_prob = 0.0 initializer_range = 0.02 bos_token_id = 30522 eos_token_id = 2 pad_token_id = 0 sep_token_id = 102 is_decoder = True use_cache = True **kwargs ) Parameters vocab_size (int, optional, defaults to 30524) — Vocabulary size of the Blip text model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling BlipModel. hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. encoder_hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers from the vision model. intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder. num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 8) — Number of attention heads for each attention layer in the Transformer encoder. max_position_embeddings (int, optional, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported. layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers. hidden_dropout_prob (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. bos_token_id (int, optional, defaults to 30522) — The id of the beginning-of-sequence token. eos_token_id (int, optional, defaults to 2) — The id of the end-of-sequence token. pad_token_id (int, optional, defaults to 0) — The id of the padding token. sep_token_id (int, optional, defaults to 102) — The id of the separator token. is_decoder (bool, optional, defaults to True) — Whether the model is used as a decoder. use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). This is the configuration class to store the configuration of a BlipTextModel. 
It is used to instantiate a BLIP text model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the BlipText used by the base architectures. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import BlipTextConfig, BlipTextModel >>> # Initializing a BlipTextConfig with default values >>> configuration = BlipTextConfig() >>> # Initializing a BlipTextModel (with random weights) from that configuration >>> model = BlipTextModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config BlipVisionConfig class transformers.BlipVisionConfig < source > ( hidden_size = 768 intermediate_size = 3072 projection_dim = 512 num_hidden_layers = 12 num_attention_heads = 12 image_size = 384 patch_size = 16 hidden_act = 'gelu' layer_norm_eps = 1e-05 attention_dropout = 0.0 initializer_range = 1e-10 **kwargs ) Parameters hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder. num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. image_size (int, optional, defaults to 384) — The size (resolution) of each image. patch_size (int, optional, defaults to 16) — The size (resolution) of each patch. hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported. layer_norm_eps (float, optional, defaults to 1e-5) — The epsilon used by the layer normalization layers. attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities. initializer_range (float, optional, defaults to 1e-10) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. This is the configuration class to store the configuration of a BlipVisionModel. It is used to instantiate a BLIP vision model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Blip-base Salesforce/blip-vqa-base architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import BlipVisionConfig, BlipVisionModel >>> # Initializing a BlipVisionConfig with default values >>> configuration = BlipVisionConfig() >>> # Initializing a BlipVisionModel (with random weights) from that configuration >>> model = BlipVisionModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config BlipProcessor class transformers.BlipProcessor < source > ( image_processor tokenizer ) Parameters image_processor (BlipImageProcessor) — An instance of BlipImageProcessor. The image processor is a required input. tokenizer (BertTokenizerFast) — An instance of BertTokenizerFast. The tokenizer is a required input. Constructs a BLIP processor which wraps a BERT tokenizer and BLIP image processor into a single processor. BlipProcessor offers all the functionalities of BlipImageProcessor and BertTokenizerFast. See the docstring of __call__() and decode() for more information. This method forwards all its arguments to BertTokenizerFast’s batch_decode(). 
Please refer to the docstring of this method for more information. This method forwards all its arguments to BertTokenizerFast’s decode(). Please refer to the docstring of this method for more information. BlipImageProcessor class transformers.BlipImageProcessor < source > ( do_resize: bool = True size: typing.Dict[str, int] = None resample: Resampling = <Resampling.BICUBIC: 3> do_rescale: bool = True rescale_factor: typing.Union[int, float] = 0.00392156862745098 do_normalize: bool = True image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None do_convert_rgb: bool = True **kwargs ) Parameters do_resize (bool, optional, defaults to True) — Whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by the do_resize parameter in the preprocess method. size (dict, optional, defaults to {"height": 384, "width": 384}) — Size of the output image after resizing. Can be overridden by the size parameter in the preprocess method. resample (PILImageResampling, optional, defaults to PILImageResampling.BICUBIC) — Resampling filter to use if resizing the image. Only has an effect if do_resize is set to True. Can be overridden by the resample parameter in the preprocess method. do_rescale (bool, optional, defaults to True) — Whether to rescale the image by the specified scale rescale_factor. Can be overridden by the do_rescale parameter in the preprocess method. rescale_factor (int or float, optional, defaults to 1/255) — Scale factor to use if rescaling the image. Only has an effect if do_rescale is set to True. Can be overridden by the rescale_factor parameter in the preprocess method. do_normalize (bool, optional, defaults to True) — Whether to normalize the image. Can be overridden by the do_normalize parameter in the preprocess method. image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) — Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the image_mean parameter in the preprocess method. image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) — Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the image_std parameter in the preprocess method. do_convert_rgb (bool, optional, defaults to True) — Whether to convert the image to RGB. Constructs a BLIP image processor. 
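For orientation, here is a minimal usage sketch that is not part of the original reference; it assumes the Salesforce/blip-image-captioning-base checkpoint used in the model examples further below. The image processor turns a PIL image into normalized pixel_values, and BlipProcessor combines it with the tokenizer so that image and text inputs can be prepared in a single call:
>>> from PIL import Image
>>> import requests
>>> from transformers import BlipImageProcessor, BlipProcessor

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> # Image-only preprocessing: resize to 384x384, rescale and normalize.
>>> image_processor = BlipImageProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
>>> pixel_values = image_processor(images=image, return_tensors="pt").pixel_values  # shape (1, 3, 384, 384)

>>> # Combined image + optional text prompt preprocessing with the processor.
>>> processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
>>> inputs = processor(images=image, text="a photography of", return_tensors="pt")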
preprocess < source > ( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]] do_resize: typing.Optional[bool] = None size: typing.Union[typing.Dict[str, int], NoneType] = None resample: Resampling = None do_rescale: typing.Optional[bool] = None rescale_factor: typing.Optional[float] = None do_normalize: typing.Optional[bool] = None image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None do_convert_rgb: bool = None data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'> input_data_format: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None **kwargs ) Parameters images (ImageInput) — Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set do_rescale=False. do_resize (bool, optional, defaults to self.do_resize) — Whether to resize the image. size (Dict[str, int], optional, defaults to self.size) — Controls the size of the image after resizing. The image is resized to (size["height"], size["width"]). resample (PILImageResampling, optional, defaults to self.resample) — Resampling filter to use if resizing the image. Only has an effect if do_resize is set to True. do_rescale (bool, optional, defaults to self.do_rescale) — Whether to rescale the image values between [0 - 1]. rescale_factor (float, optional, defaults to self.rescale_factor) — Rescale factor to rescale the image by if do_rescale is set to True. do_normalize (bool, optional, defaults to self.do_normalize) — Whether to normalize the image. image_mean (float or List[float], optional, defaults to self.image_mean) — Image mean to normalize the image by if do_normalize is set to True. image_std (float or List[float], optional, defaults to self.image_std) — Image standard deviation to normalize the image by if do_normalize is set to True. do_convert_rgb (bool, optional, defaults to self.do_convert_rgb) — Whether to convert the image to RGB. return_tensors (str or TensorType, optional) — The type of tensors to return. Can be one of: Unset: Return a list of np.ndarray. TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor. TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor. TensorType.NUMPY or 'np': Return a batch of type np.ndarray. TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray. data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) — The channel dimension format for the output image. Can be one of: "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format. "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format. Unset: Use the channel dimension format of the input image. input_data_format (ChannelDimension or str, optional) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. 
Can be one of: "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format. "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format. "none" or ChannelDimension.NONE: image in (height, width) format. Preprocess an image or batch of images. BlipModel class transformers.BlipModel < source > ( config: BlipConfig ) Parameters config (BlipConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None pixel_values: typing.Optional[torch.FloatTensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.LongTensor] = None return_loss: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.blip.modeling_blip.BlipOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoProcessor. See BlipProcessor.__call__() for details. What are input IDs? attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using BlipImageProcessor. See BlipImageProcessor.call() for details. return_loss (bool, optional) — Whether or not to return the contrastive loss. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.models.blip.modeling_blip.BlipOutput or tuple(torch.FloatTensor) A transformers.models.blip.modeling_blip.BlipOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.blip.configuration_blip.BlipConfig'>) and inputs. 
loss (torch.FloatTensor of shape (1,), optional, returned when return_loss is True) — Contrastive loss for image-text similarity. logits_per_image:(torch.FloatTensor of shape (image_batch_size, text_batch_size)) — The scaled dot product scores between image_embeds and text_embeds. This represents the image-text similarity scores. logits_per_text:(torch.FloatTensor of shape (text_batch_size, image_batch_size)) — The scaled dot product scores between text_embeds and image_embeds. This represents the text-image similarity scores. text_embeds(torch.FloatTensor of shape (batch_size, output_dim) — The text embeddings obtained by applying the projection layer to the pooled output of BlipTextModel. image_embeds(torch.FloatTensor of shape (batch_size, output_dim) — The image embeddings obtained by applying the projection layer to the pooled output of BlipVisionModel. text_model_output(BaseModelOutputWithPooling): The output of the BlipTextModel. vision_model_output(BaseModelOutputWithPooling): The output of the BlipVisionModel. The BlipModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, BlipModel >>> model = BlipModel.from_pretrained("Salesforce/blip-image-captioning-base") >>> processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor( ... text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True ... ) >>> outputs = model(**inputs) >>> logits_per_image = outputs.logits_per_image >>> probs = logits_per_image.softmax(dim=1) get_text_features < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None return_dict: typing.Optional[bool] = None ) → text_features (torch.FloatTensor of shape (batch_size, output_dim) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoProcessor. See BlipProcessor.__call__() for details. What are input IDs? attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. 
Returns text_features (torch.FloatTensor of shape (batch_size, output_dim)) The text embeddings obtained by applying the projection layer to the pooled output of BlipTextModel. The BlipModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import AutoProcessor, BlipModel >>> model = BlipModel.from_pretrained("Salesforce/blip-image-captioning-base") >>> processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base") >>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt") >>> text_features = model.get_text_features(**inputs) get_image_features < source > ( pixel_values: typing.Optional[torch.FloatTensor] = None return_dict: typing.Optional[bool] = None ) → image_features (torch.FloatTensor of shape (batch_size, output_dim)) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using BlipImageProcessor. See BlipImageProcessor.call() for details. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns image_features (torch.FloatTensor of shape (batch_size, output_dim)) The image embeddings obtained by applying the projection layer to the pooled output of BlipVisionModel. The BlipModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, BlipModel >>> model = BlipModel.from_pretrained("Salesforce/blip-image-captioning-base") >>> processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor(images=image, return_tensors="pt") >>> image_features = model.get_image_features(**inputs) BlipTextModel class transformers.BlipTextModel < source > ( config add_pooling_layer = True ) The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in Attention is all you need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. To behave as a decoder, the model needs to be called with the is_decoder argument set to True; an encoder_hidden_states is then expected as an input to the forward pass. 
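As an illustrative sketch only (not part of the original reference), the snippet below builds a randomly initialized text encoder from a fresh BlipTextConfig rather than loading pretrained weights, and runs it on a placeholder batch of token ids:
>>> import torch
>>> from transformers import BlipTextConfig, BlipTextModel

>>> # Randomly initialized text model; the token ids below are arbitrary placeholders.
>>> model = BlipTextModel(BlipTextConfig())
>>> input_ids = torch.tensor([[101, 2023, 2003, 1037, 7953, 102]])
>>> outputs = model(input_ids=input_ids)
>>> last_hidden_state = outputs.last_hidden_state  # shape (1, 6, 768)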
forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None encoder_embeds: typing.Optional[torch.Tensor] = None encoder_hidden_states: typing.Optional[torch.Tensor] = None encoder_attention_mask: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None is_decoder: typing.Optional[bool] = False ) encoder_hidden_states (torch.FloatTensor, optional): Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. encoder_attention_mask (torch.FloatTensor, optional): Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. past_key_values (tuple(tuple(torch.FloatTensor)), optional): Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). use_cache (bool, optional): If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). BlipVisionModel class transformers.BlipVisionModel < source > ( config: BlipVisionConfig ) forward < source > ( pixel_values: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using BlipImageProcessor. See BlipImageProcessor.call() for details. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.blip.configuration_blip.BlipVisionConfig'>) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. 
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. The BlipVisionModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. BlipForConditionalGeneration class transformers.BlipForConditionalGeneration < source > ( config: BlipConfig ) Parameters config (BlipConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. BLIP Model for image captioning. The model consists of a vision encoder and a text decoder. One can optionally pass input_ids to the model, which serve as a text prompt, to make the text decoder continue the prompt; the decoder will then start generating the caption from that text input. If no text input is provided, the decoder will start with the [BOS] (beginning-of-sequence) token only. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. forward < source > ( pixel_values: FloatTensor input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None labels: typing.Optional[torch.LongTensor] = None return_dict: typing.Optional[bool] = None ) → transformers.models.blip.modeling_blip.BlipForConditionalGenerationModelOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using BlipImageProcessor.
See BlipImageProcessor.call() for details. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.models.blip.modeling_blip.BlipForConditionalGenerationModelOutput or tuple(torch.FloatTensor) A transformers.models.blip.modeling_blip.BlipForConditionalGenerationModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.blip.configuration_blip.BlipVisionConfig'>) and inputs. loss (torch.FloatTensor, optional, returned when labels is provided, torch.FloatTensor of shape (1,)) — Languge modeling loss from the text decoder. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size), optional) — Prediction scores of the language modeling head of the text decoder model. image_embeds (torch.FloatTensor of shape (batch_size, output_dim), optional) — The image embeddings obtained after applying the Vision Transformer model to the input image. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The BlipForConditionalGeneration forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, BlipForConditionalGeneration >>> processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base") >>> model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> text = "A picture of" >>> inputs = processor(images=image, text=text, return_tensors="pt") >>> outputs = model(**inputs) BlipForImageTextRetrieval class transformers.BlipForImageTextRetrieval < source > ( config: BlipConfig ) Parameters config (BlipConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. 
Check out the from_pretrained() method to load the model weights. BLIP Model with a vision and text projector, and a classification head on top. The model is used in the context of image-text retrieval. Given an image and a text, the model returns the probability of the text being relevant to the image. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: LongTensor pixel_values: FloatTensor use_itm_head: typing.Optional[bool] = True attention_mask: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.blip.modeling_blip.BlipTextVisionModelOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using BlipImageProcessor. See BlipImageProcessor.call() for details. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.models.blip.modeling_blip.BlipTextVisionModelOutput or tuple(torch.FloatTensor) A transformers.models.blip.modeling_blip.BlipTextVisionModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.blip.configuration_blip.BlipVisionConfig'>) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Languge modeling loss from the text decoder. image_embeds (torch.FloatTensor of shape (batch_size, output_dim) optional returned when model is initialized with with_projection=True) — The image embeddings obtained by applying the projection layer to the pooler_output. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. 
The BlipForImageTextRetrieval forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, BlipForImageTextRetrieval >>> model = BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-base-coco") >>> processor = AutoProcessor.from_pretrained("Salesforce/blip-itm-base-coco") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> text = "an image of a cat" >>> inputs = processor(images=image, text=text, return_tensors="pt") >>> outputs = model(**inputs) BlipForQuestionAnswering class transformers.BlipForQuestionAnswering < source > ( config: BlipConfig ) Parameters config (BlipConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. BLIP Model for visual question answering. The model consists of a vision encoder, a text encoder as well as a text decoder. The vision encoder will encode the input image, the text encoder will encode the input question together with the encoding of the image, and the text decoder will output the answer to the question. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: LongTensor pixel_values: FloatTensor decoder_input_ids: typing.Optional[torch.LongTensor] = None decoder_attention_mask: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None labels: typing.Optional[torch.LongTensor] = None return_dict: typing.Optional[bool] = None ) → transformers.models.blip.modeling_blip.BlipTextVisionModelOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using BlipImageProcessor. See BlipImageProcessor.call() for details. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. 
Returns transformers.models.blip.modeling_blip.BlipTextVisionModelOutput or tuple(torch.FloatTensor) A transformers.models.blip.modeling_blip.BlipTextVisionModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.blip.configuration_blip.BlipVisionConfig'>) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Languge modeling loss from the text decoder. image_embeds (torch.FloatTensor of shape (batch_size, output_dim) optional returned when model is initialized with with_projection=True) — The image embeddings obtained by applying the projection layer to the pooler_output. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The BlipForQuestionAnswering forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, BlipForQuestionAnswering >>> model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base") >>> processor = AutoProcessor.from_pretrained("Salesforce/blip-vqa-base") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> >>> text = "How many cats are in the picture?" >>> label = "2" >>> inputs = processor(images=image, text=text, return_tensors="pt") >>> labels = processor(text=label, return_tensors="pt").input_ids >>> inputs["labels"] = labels >>> outputs = model(**inputs) >>> loss = outputs.loss >>> loss.backward() >>> >>> text = "How many cats are in the picture?" 
>>> inputs = processor(images=image, text=text, return_tensors="pt") >>> outputs = model.generate(**inputs) >>> print(processor.decode(outputs[0], skip_special_tokens=True)) 2 TFBlipModel class transformers.TFBlipModel < source > ( *args **kwargs ) call < source > ( input_ids: tf.Tensor | None = None pixel_values: tf.Tensor | None = None attention_mask: tf.Tensor | None = None position_ids: tf.Tensor | None = None return_loss: Optional[bool] = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: Optional[bool] = None ) → transformers.models.blip.modeling_tf_blip.TFBlipOutput or tuple(tf.Tensor) Parameters input_ids (tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoProcessor. See BlipProcessor.__call__() for details. What are input IDs? attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using BlipImageProcessor. See BlipImageProcessor.call() for details. return_loss (bool, optional) — Whether or not to return the contrastive loss. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.models.blip.modeling_tf_blip.TFBlipOutput or tuple(tf.Tensor) A transformers.models.blip.modeling_tf_blip.TFBlipOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.blip.configuration_blip.BlipConfig'>) and inputs. loss (tf.Tensor of shape (1,), optional, returned when return_loss is True) — Contrastive loss for image-text similarity. logits_per_image:(tf.Tensor of shape (image_batch_size, text_batch_size)) — The scaled dot product scores between image_embeds and text_embeds. This represents the image-text similarity scores. logits_per_text:(tf.Tensor of shape (text_batch_size, image_batch_size)) — The scaled dot product scores between text_embeds and image_embeds. This represents the text-image similarity scores. text_embeds(tf.Tensor of shape (batch_size, output_dim) — The text embeddings obtained by applying the projection layer to the pooled output of BlipTextModel. image_embeds(tf.Tensor of shape (batch_size, output_dim) — The image embeddings obtained by applying the projection layer to the pooled output of BlipVisionModel. text_model_output(BaseModelOutputWithPooling): The output of the BlipTextModel. 
vision_model_output(BaseModelOutputWithPooling): The output of the BlipVisionModel. The TFBlipModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from PIL import Image >>> import requests >>> import tensorflow as tf >>> from transformers import AutoProcessor, TFBlipModel >>> model = TFBlipModel.from_pretrained("Salesforce/blip-image-captioning-base") >>> processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor( ... text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="tf", padding=True ... ) >>> outputs = model(**inputs) >>> logits_per_image = outputs.logits_per_image >>> probs = tf.nn.softmax(logits_per_image, axis=1) get_text_features < source > ( input_ids: tf.Tensor | None = None attention_mask: tf.Tensor | None = None position_ids: tf.Tensor | None = None return_dict: Optional[bool] = None ) → text_features (tf.Tensor of shape (batch_size, output_dim)) Parameters input_ids (tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoProcessor. See BlipProcessor.__call__() for details. What are input IDs? attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns text_features (tf.Tensor of shape (batch_size, output_dim)) The text embeddings obtained by applying the projection layer to the pooled output of TFBlipTextModel. The TFBlipModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Examples: >>> from transformers import AutoProcessor, TFBlipModel >>> model = TFBlipModel.from_pretrained("Salesforce/blip-image-captioning-base") >>> processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base") >>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="tf") >>> text_features = model.get_text_features(**inputs) get_image_features < source > ( pixel_values: tf.Tensor | None = None return_dict: Optional[bool] = None ) → image_features (tf.Tensor of shape (batch_size, output_dim)) Parameters pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using BlipImageProcessor. See BlipImageProcessor.__call__() for details. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns image_features (tf.Tensor of shape (batch_size, output_dim)) The image embeddings obtained by applying the projection layer to the pooled output of TFBlipVisionModel. The TFBlipModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, TFBlipModel >>> model = TFBlipModel.from_pretrained("Salesforce/blip-image-captioning-base") >>> processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor(images=image, return_tensors="tf") >>> image_features = model.get_image_features(**inputs) TFBlipTextModel class transformers.TFBlipTextModel < source > ( *args **kwargs ) The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in Attention is all you need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. To behave as a decoder, the model needs to be called with is_decoder set to True; an encoder_hidden_states is then expected as an input to the forward pass. call < source > ( input_ids: TFModelInputType | None = None attention_mask: tf.Tensor | None = None position_ids: tf.Tensor | None = None head_mask: tf.Tensor | None = None inputs_embeds: tf.Tensor | None = None encoder_embeds: tf.Tensor | None = None encoder_hidden_states: tf.Tensor | None = None encoder_attention_mask: tf.Tensor | None = None past_key_values: Tuple[Tuple[tf.Tensor]] | None = None use_cache: bool | None = None output_attentions: bool | None = None output_hidden_states: bool | None = None return_dict: bool | None = None is_decoder: bool = False training: bool = False ) Parameters input_ids (tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Padding will be ignored by default should you provide it. Indices can be obtained using AutoProcessor. See BlipProcessor.__call__() for details. What are input IDs? attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. encoder_hidden_states (tf.Tensor, optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. encoder_attention_mask (tf.Tensor, optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. past_key_values (tuple(tuple(tf.Tensor)), optional) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). The TFBlipTextModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. TFBlipVisionModel class transformers.TFBlipVisionModel < source > ( *args **kwargs ) call < source > ( pixel_values: tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: Optional[bool] = None ) → transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor) Parameters pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using BlipImageProcessor. See BlipImageProcessor.call() for details. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. 
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.blip.configuration_blip.BlipVisionConfig'>) and inputs. last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. This output is usually not a good summary of the semantic content of the input; you’re often better off averaging or pooling the sequence of hidden-states for the whole input sequence. hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFBlipVisionModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. TFBlipForConditionalGeneration class transformers.TFBlipForConditionalGeneration < source > ( *args **kwargs ) Parameters config (BlipConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. BLIP Model for image captioning. The model consists of a vision encoder and a text decoder. One can optionally pass input_ids to the model, which serve as a text prompt, to make the text decoder continue the prompt; the decoder will then start generating the caption from that text input. If no text input is provided, the decoder will start with the [BOS] (beginning-of-sequence) token only. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
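In practice, captions are produced with generate rather than a plain forward pass. The snippet below is a minimal sketch, assuming TFBlipForConditionalGeneration supports the standard generate API with pixel_values and an optional text prompt:

>>> from PIL import Image
>>> import requests
>>> from transformers import AutoProcessor, TFBlipForConditionalGeneration

>>> processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
>>> model = TFBlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> # the text acts as a prompt that the decoder continues; omit it to start from [BOS]
>>> inputs = processor(images=image, text="A picture of", return_tensors="tf")
>>> generated_ids = model.generate(**inputs, max_length=20)
>>> print(processor.decode(generated_ids[0], skip_special_tokens=True))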
call < source > ( pixel_values: tf.Tensor input_ids: tf.Tensor | None = None attention_mask: tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None labels: tf.Tensor | None = None return_dict: Optional[bool] = None training: Optional[bool] = None ) → transformers.models.blip.modeling_tf_blip.TFBlipForConditionalGenerationModelOutput or tuple(tf.Tensor) Parameters pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using BlipImageProcessor. See BlipImageProcessor.__call__() for details. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.models.blip.modeling_tf_blip.TFBlipForConditionalGenerationModelOutput or tuple(tf.Tensor) A transformers.models.blip.modeling_tf_blip.TFBlipForConditionalGenerationModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.blip.configuration_blip.BlipConfig'>) and inputs. loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Language modeling loss from the text decoder. logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size), optional) — Prediction scores of the language modeling head of the text decoder model. image_embeds (tf.Tensor of shape (batch_size, output_dim), optional) — The image embeddings obtained after applying the Vision Transformer model to the input image. last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFBlipForConditionalGeneration forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Examples: >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, TFBlipForConditionalGeneration >>> processor = AutoProcessor.from_pretrained("Salesforce/blip-image-captioning-base") >>> model = TFBlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> text = "A picture of" >>> inputs = processor(images=image, text=text, return_tensors="tf") >>> outputs = model(**inputs) TFBlipForImageTextRetrieval class transformers.TFBlipForImageTextRetrieval < source > ( *args **kwargs ) Parameters config (BlipConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. BLIP Model with a vision and text projector, and a classification head on top. The model is used in the context of image-text retrieval. Given an image and a text, the model returns the probability of the text being relevant to the image. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. call < source > ( input_ids: tf.Tensor pixel_values: tf.Tensor | None = None use_itm_head: Optional[bool] = True attention_mask: tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: Optional[bool] = None ) → transformers.models.blip.modeling_tf_blip.TFBlipImageTextMatchingModelOutput or tuple(tf.Tensor) Parameters pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using BlipImageProcessor. See BlipImageProcessor.call() for details. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.models.blip.modeling_tf_blip.TFBlipImageTextMatchingModelOutput or tuple(tf.Tensor) A transformers.models.blip.modeling_tf_blip.TFBlipImageTextMatchingModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.blip.configuration_blip.BlipVisionConfig'>) and inputs. itm_score (tf.Tensor) — The image-text similarity scores. loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Languge modeling loss from the text decoder. image_embeds (tf.Tensor of shape (batch_size, output_dim) optional returned when model is initialized with with_projection=True) — The image embeddings obtained by applying the projection layer to the pooler_output. 
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. vision_pooler_output (tf.Tensor of shape (batch_size, hidden_size), optional) — Last layer hidden-state of the vision-only branch of the model. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. question_embeds (tf.Tensor) — The question embeddings obtained by the text projection layer. The TFBlipForImageTextRetrieval forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, TFBlipForImageTextRetrieval >>> model = TFBlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-base-coco") >>> processor = AutoProcessor.from_pretrained("Salesforce/blip-itm-base-coco") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> text = "an image of a cat" >>> inputs = processor(images=image, text=text, return_tensors="tf") >>> outputs = model(**inputs) TFBlipForQuestionAnswering class transformers.TFBlipForQuestionAnswering < source > ( *args **kwargs ) Parameters config (BlipConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. BLIP Model for visual question answering. The model consists of a vision encoder, a text encoder as well as a text decoder. The vision encoder will encode the input image, the text encoder will encode the input question together with the encoding of the image, and the text decoder will output the answer to the question. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
call < source > ( input_ids: tf.Tensor pixel_values: tf.Tensor | None = None decoder_input_ids: tf.Tensor | None = None decoder_attention_mask: tf.Tensor | None = None attention_mask: tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None labels: tf.Tensor | None = None return_dict: Optional[bool] = None training: Optional[bool] = None ) → transformers.models.blip.modeling_tf_blip.TFBlipTextVisionModelOutput or tuple(tf.Tensor) Parameters pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using BlipImageProcessor. See BlipImageProcessor.call() for details. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.models.blip.modeling_tf_blip.TFBlipTextVisionModelOutput or tuple(tf.Tensor) A transformers.models.blip.modeling_tf_blip.TFBlipTextVisionModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.blip.configuration_blip.BlipVisionConfig'>) and inputs. loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Languge modeling loss from the text decoder. image_embeds (tf.Tensor of shape (batch_size, output_dim) optional returned when model is initialized with with_projection=True) — The image embeddings obtained by applying the projection layer to the pooler_output. last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFBlipForQuestionAnswering forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Examples: >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, TFBlipForQuestionAnswering >>> model = TFBlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base") >>> processor = AutoProcessor.from_pretrained("Salesforce/blip-vqa-base") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> >>> text = "How many cats are in the picture?" >>> label = "2" >>> inputs = processor(images=image, text=text, return_tensors="tf") >>> labels = processor(text=label, return_tensors="tf").input_ids >>> inputs["labels"] = labels >>> outputs = model(**inputs) >>> loss = outputs.loss >>> >>> text = "How many cats are in the picture?" >>> inputs = processor(images=image, text=text, return_tensors="tf") >>> outputs = model.generate(**inputs) >>> print(processor.decode(outputs[0], skip_special_tokens=True)) 2
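The image-text retrieval examples on this page stop at the raw forward outputs. To turn them into an actual match probability, a small follow-up sketch is shown below; it assumes the model returns the TFBlipImageTextMatchingModelOutput documented above and that the ITM head produces two logits per pair, with index 1 corresponding to "image and text match", as in the original BLIP implementation:

>>> import tensorflow as tf
>>> # "outputs" comes from the TFBlipForImageTextRetrieval example above (use_itm_head is True by default)
>>> itm_probs = tf.nn.softmax(outputs.itm_score, axis=1)
>>> match_probability = float(itm_probs[0, 1])  # assumes index 1 is the "match" class
>>> print(f"probability that the text describes the image: {match_probability:.3f}")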
https://huggingface.co/docs/transformers/model_doc/bigbird_pegasus
BigBirdPegasus Overview The BigBird model was proposed in Big Bird: Transformers for Longer Sequences by Zaheer, Manzil and Guruganesh, Guru and Dubey, Kumar Avinava and Ainslie, Joshua and Alberti, Chris and Ontanon, Santiago and Pham, Philip and Ravula, Anirudh and Wang, Qifan and Yang, Li and others. BigBird is a sparse-attention-based transformer that extends Transformer-based models, such as BERT, to much longer sequences. In addition to sparse attention, BigBird also applies global attention as well as random attention to the input sequence. Theoretically, it has been shown that applying sparse, global, and random attention approximates full attention, while being computationally much more efficient for longer sequences. As a consequence of the capability to handle longer context, BigBird has shown improved performance on various long document NLP tasks, such as question answering and summarization, compared to BERT or RoBERTa. The abstract from the paper is the following: Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP. Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence length due to their full attention mechanism. To remedy this, we propose, BigBird, a sparse attention mechanism that reduces this quadratic dependency to linear. We show that BigBird is a universal approximator of sequence functions and is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our theoretical analysis reveals some of the benefits of having O(1) global tokens (such as CLS), that attend to the entire sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to 8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context, BigBird drastically improves performance on various NLP tasks such as question answering and summarization. We also propose novel applications to genomics data. Tips: For an in-depth explanation of how BigBird’s attention works, see this blog post. BigBird comes with 2 implementations: original_full & block_sparse. For sequence lengths < 1024, using original_full is advised as there is no benefit in using block_sparse attention. The code currently uses a window size of 3 blocks and 2 global blocks. Sequence length must be divisible by the block size. The current implementation supports only ITC. The current implementation doesn’t support num_random_blocks = 0. BigBirdPegasus uses the PegasusTokenizer. BigBird is a model with absolute position embeddings, so it’s usually advised to pad the inputs on the right rather than the left. The original code can be found here. A short usage sketch follows below.
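As a concrete illustration of the tips above, here is a minimal summarization sketch. It uses the google/bigbird-pegasus-large-arxiv checkpoint and passes attention_type as a from_pretrained override, which is the usual way to switch between block_sparse and original_full; the input text and generation settings are placeholders:

>>> from transformers import AutoTokenizer, BigBirdPegasusForConditionalGeneration

>>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv")
>>> # default attention_type is "block_sparse"; for inputs shorter than 1024 tokens,
>>> # "original_full" is advised (see the tips above)
>>> model = BigBirdPegasusForConditionalGeneration.from_pretrained(
...     "google/bigbird-pegasus-large-arxiv", attention_type="original_full"
... )

>>> text = "Replace this with the (long) document you want to summarize."
>>> inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=4096)
>>> summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
>>> print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))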
Documentation resources Text classification task guide Question answering task guide Causal language modeling task guide Translation task guide Summarization task guide BigBirdPegasusConfig class transformers.BigBirdPegasusConfig < source > ( vocab_size = 96103 max_position_embeddings = 4096 encoder_layers = 16 encoder_ffn_dim = 4096 encoder_attention_heads = 16 decoder_layers = 16 decoder_ffn_dim = 4096 decoder_attention_heads = 16 encoder_layerdrop = 0.0 decoder_layerdrop = 0.0 use_cache = True is_encoder_decoder = True activation_function = 'gelu_new' d_model = 1024 dropout = 0.1 attention_dropout = 0.0 activation_dropout = 0.0 init_std = 0.02 decoder_start_token_id = 2 classifier_dropout = 0.0 scale_embedding = True pad_token_id = 0 bos_token_id = 2 eos_token_id = 1 attention_type = 'block_sparse' block_size = 64 num_random_blocks = 3 use_bias = False **kwargs ) Parameters vocab_size (int, optional, defaults to 96103) — Vocabulary size of the BigBirdPegasus model. Defines the number of different tokens that can be represented by the input_ids passed when calling BigBirdPegasusModel. d_model (int, optional, defaults to 1024) — Dimension of the layers and the pooler layer. encoder_layers (int, optional, defaults to 16) — Number of encoder layers. decoder_layers (int, optional, defaults to 16) — Number of decoder layers. encoder_attention_heads (int, optional, defaults to 16) — Number of attention heads for each attention layer in the Transformer encoder. decoder_attention_heads (int, optional, defaults to 16) — Number of attention heads for each attention layer in the Transformer decoder. decoder_ffn_dim (int, optional, defaults to 4096) — Dimension of the “intermediate” (often named feed-forward) layer in the decoder. encoder_ffn_dim (int, optional, defaults to 4096) — Dimension of the “intermediate” (often named feed-forward) layer in the encoder. activation_function (str or function, optional, defaults to "gelu_new") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported. dropout (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities. activation_dropout (float, optional, defaults to 0.0) — The dropout ratio for activations inside the fully connected layer. classifier_dropout (float, optional, defaults to 0.0) — The dropout ratio for the classifier. max_position_embeddings (int, optional, defaults to 4096) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 1024 or 2048 or 4096). init_std (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. encoder_layerdrop (float, optional, defaults to 0.0) — The LayerDrop probability for the encoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556) for more details. decoder_layerdrop (float, optional, defaults to 0.0) — The LayerDrop probability for the decoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556) for more details. use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models).
attention_type (str, optional, defaults to "block_sparse") — Whether to use block sparse attention (with O(n) complexity), as introduced in the paper, or the original attention layer (with O(n^2) complexity) in the encoder. Possible values are "original_full" and "block_sparse". use_bias (bool, optional, defaults to False) — Whether to use bias in query, key, value. block_size (int, optional, defaults to 64) — Size of each block. Useful only when attention_type == "block_sparse". num_random_blocks (int, optional, defaults to 3) — Each query attends to this many random blocks. Useful only when attention_type == "block_sparse". scale_embedding (bool, optional, defaults to True) — Whether to rescale embeddings with (hidden_size ** 0.5). This is the configuration class to store the configuration of a BigBirdPegasusModel. It is used to instantiate a BigBirdPegasus model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the BigBirdPegasus google/bigbird-pegasus-large-arxiv architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import BigBirdPegasusConfig, BigBirdPegasusModel >>> # Initializing a BigBirdPegasus google/bigbird-pegasus-large-arxiv style configuration >>> configuration = BigBirdPegasusConfig() >>> # Initializing a model (with random weights) from that configuration >>> model = BigBirdPegasusModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config BigBirdPegasusModel class transformers.BigBirdPegasusModel < source > ( config: BigBirdPegasusConfig ) Parameters config (BigBirdPegasusConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare BigBirdPegasus Model outputting raw hidden-states without any specific head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: LongTensor = None attention_mask: typing.Optional[torch.Tensor] = None decoder_input_ids: typing.Optional[torch.LongTensor] = None decoder_attention_mask: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.Tensor] = None decoder_head_mask: typing.Optional[torch.Tensor] = None cross_attn_head_mask: typing.Optional[torch.Tensor] = None encoder_outputs: typing.Optional[typing.List[torch.FloatTensor]] = None past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.Seq2SeqModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer.
See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Provide for translation and summarization training. By default, the model will create this tensor by shifting the input_ids to the right, following the paper. decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default. If you want to change padding behavior, you should read modeling_bigbird_pegasus._prepare_decoder_attention_mask and modify to your needs. See diagram 1 in the paper for more information on the default strategy. decoder_head_mask (torch.Tensor of shape (num_layers, num_heads), optional) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. encoder_outputs (tuple(tuple(torch.FloatTensor), optional) — Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions) last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional): Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) — Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be input (see past_key_values). This is useful if you want more control over how to convert decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value of inputs_embeds. 
use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.Seq2SeqModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BigBirdPegasusConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs. decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). 
Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs. encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. The BigBirdPegasusModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, BigBirdPegasusModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv") >>> model = BigBirdPegasusModel.from_pretrained("google/bigbird-pegasus-large-arxiv") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state BigBirdPegasusForConditionalGeneration class transformers.BigBirdPegasusForConditionalGeneration < source > ( config: BigBirdPegasusConfig ) Parameters config (BigBirdPegasusConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The BigBirdPegasus Model with a language modeling head. Can be used for summarization. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: LongTensor = None attention_mask: typing.Optional[torch.Tensor] = None decoder_input_ids: typing.Optional[torch.LongTensor] = None decoder_attention_mask: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.Tensor] = None decoder_head_mask: typing.Optional[torch.Tensor] = None cross_attn_head_mask: typing.Optional[torch.Tensor] = None encoder_outputs: typing.Optional[typing.List[torch.FloatTensor]] = None past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? 
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Provide for translation and summarization training. By default, the model will create this tensor by shifting the input_ids to the right, following the paper. decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default. If you want to change padding behavior, you should read modeling_bigbird_pegasus._prepare_decoder_attention_mask and modify to your needs. See diagram 1 in the paper for more information on the default strategy. decoder_head_mask (torch.Tensor of shape (num_layers, num_heads), optional) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. encoder_outputs (tuple(tuple(torch.FloatTensor), optional) — Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions) last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional): Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) — Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be input (see past_key_values). This is useful if you want more control over how to convert decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value of inputs_embeds. use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). 
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]. A transformers.modeling_outputs.Seq2SeqLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BigBirdPegasusConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. 
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. The BigBirdPegasusForConditionalGeneration forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Summarization example: >>> from transformers import AutoTokenizer, BigBirdPegasusForConditionalGeneration >>> model = BigBirdPegasusForConditionalGeneration.from_pretrained("google/bigbird-pegasus-large-arxiv") >>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv") >>> ARTICLE_TO_SUMMARIZE = ( ... "The dominant sequence transduction models are based on complex recurrent or convolutional neural " ... "networks in an encoder-decoder configuration. The best performing models also connect the encoder " ... "and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, " ... "based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. " ... "Experiments on two machine translation tasks show these models to be superior in quality " ... "while being more parallelizable and requiring significantly less time to train." ... ) >>> inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=4096, return_tensors="pt", truncation=True) >>> >>> summary_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=15) >>> tokenizer.batch_decode(summary_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0] 'dominant sequence models are based on recurrent or convolutional neural networks .' BigBirdPegasusForSequenceClassification class transformers.BigBirdPegasusForSequenceClassification < source > ( config: BigBirdPegasusConfig **kwargs ) Parameters config (BigBirdPegasusConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. BigBirdPegasus model with a sequence classification/head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
forward < source > ( input_ids: LongTensor = None attention_mask: typing.Optional[torch.Tensor] = None decoder_input_ids: typing.Optional[torch.LongTensor] = None decoder_attention_mask: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.Tensor] = None decoder_head_mask: typing.Optional[torch.Tensor] = None cross_attn_head_mask: typing.Optional[torch.Tensor] = None encoder_outputs: typing.Optional[typing.List[torch.FloatTensor]] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.Seq2SeqSequenceClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Provide for translation and summarization training. By default, the model will create this tensor by shifting the input_ids to the right, following the paper. decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default. If you want to change padding behavior, you should read modeling_bigbird_pegasus._prepare_decoder_attention_mask and modify to your needs. See diagram 1 in the paper for more information on the default strategy. decoder_head_mask (torch.Tensor of shape (num_layers, num_heads), optional) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. encoder_outputs (tuple(tuple(torch.FloatTensor), optional) — Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions) last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. 
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional): Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) — Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be input (see past_key_values). This is useful if you want more control over how to convert decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value of inputs_embeds. use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_outputs.Seq2SeqSequenceClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BigBirdPegasusConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when label is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). 
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. The BigBirdPegasusForSequenceClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example of single-label classification: >>> import torch >>> from transformers import AutoTokenizer, BigBirdPegasusForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv") >>> model = BigBirdPegasusForSequenceClassification.from_pretrained("google/bigbird-pegasus-large-arxiv") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_id = logits.argmax().item() >>> >>> num_labels = len(model.config.id2label) >>> model = BigBirdPegasusForSequenceClassification.from_pretrained("google/bigbird-pegasus-large-arxiv", num_labels=num_labels) >>> labels = torch.tensor([1]) >>> loss = model(**inputs, labels=labels).loss Example of multi-label classification: >>> import torch >>> from transformers import AutoTokenizer, BigBirdPegasusForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv") >>> model = BigBirdPegasusForSequenceClassification.from_pretrained("google/bigbird-pegasus-large-arxiv", problem_type="multi_label_classification") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... 
logits = model(**inputs).logits >>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5] >>> >>> num_labels = len(model.config.id2label) >>> model = BigBirdPegasusForSequenceClassification.from_pretrained( ... "google/bigbird-pegasus-large-arxiv", num_labels=num_labels, problem_type="multi_label_classification" ... ) >>> labels = torch.sum( ... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1 ... ).to(torch.float) >>> loss = model(**inputs, labels=labels).loss BigBirdPegasusForQuestionAnswering class transformers.BigBirdPegasusForQuestionAnswering < source > ( config ) Parameters config (BigBirdPegasusConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. BigBirdPegasus Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits). This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: Tensor = None attention_mask: typing.Optional[torch.Tensor] = None decoder_input_ids: typing.Optional[torch.LongTensor] = None decoder_attention_mask: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.Tensor] = None decoder_head_mask: typing.Optional[torch.Tensor] = None cross_attn_head_mask: typing.Optional[torch.Tensor] = None encoder_outputs: typing.Optional[typing.List[torch.FloatTensor]] = None start_positions: typing.Optional[torch.LongTensor] = None end_positions: typing.Optional[torch.LongTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.Seq2SeqQuestionAnsweringModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Provide for translation and summarization training. By default, the model will create this tensor by shifting the input_ids to the right, following the paper. 
decoder_attention_mask (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default. If you want to change padding behavior, you should read modeling_bigbird_pegasus._prepare_decoder_attention_mask and modify to your needs. See diagram 1 in the paper for more information on the default strategy. decoder_head_mask (torch.Tensor of shape (num_layers, num_heads), optional) — Mask to nullify selected heads of the attention modules in the decoder. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. encoder_outputs (tuple(tuple(torch.FloatTensor), optional) — Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions) last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional): Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) — Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be input (see past_key_values). This is useful if you want more control over how to convert decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If decoder_input_ids and decoder_inputs_embeds are both unset, decoder_inputs_embeds takes the value of inputs_embeds. use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. 
start_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. end_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. A transformers.modeling_outputs.Seq2SeqQuestionAnsweringModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BigBirdPegasusConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. 
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. The BigBirdPegasusForQuestionAnswering forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, BigBirdPegasusForQuestionAnswering >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv") >>> model = BigBirdPegasusForQuestionAnswering.from_pretrained("google/bigbird-pegasus-large-arxiv") >>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" >>> inputs = tokenizer(question, text, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> answer_start_index = outputs.start_logits.argmax() >>> answer_end_index = outputs.end_logits.argmax() >>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] >>> >>> target_start_index = torch.tensor([14]) >>> target_end_index = torch.tensor([15]) >>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index) >>> loss = outputs.loss BigBirdPegasusForCausalLM class transformers.BigBirdPegasusForCausalLM < source > ( config ) forward < source > ( input_ids: LongTensor = None attention_mask: typing.Optional[torch.Tensor] = None encoder_hidden_states: typing.Optional[torch.FloatTensor] = None encoder_attention_mask: typing.Optional[torch.FloatTensor] = None head_mask: typing.Optional[torch.Tensor] = None cross_attn_head_mask: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. 
What are attention masks? encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. cross_attn_head_mask (torch.Tensor of shape (decoder_layers, decoder_attention_heads), optional) — Mask to nullify selected heads of the cross-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). The two additional tensors are only required when the model is used as a decoder in a Sequence to Sequence model. Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should either be in [0, ..., config.vocab_size] or -100 (see input_ids docstring). Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]. use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BigBirdPegasusConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Cross attentions weights after the attention softmax, used to compute the weighted average in the cross-attention heads. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key, value states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting. Only relevant if config.is_decoder = True. Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. Example: >>> from transformers import AutoTokenizer, BigBirdPegasusForCausalLM >>> tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv") >>> model = BigBirdPegasusForCausalLM.from_pretrained( ... "google/bigbird-pegasus-large-arxiv", add_cross_attention=False ... ) >>> assert model.config.is_decoder, f"{model.__class__} has to be configured as a decoder." >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> logits = outputs.logits
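As a small, hedged follow-up to the example above (not part of the original documentation), the logits returned by BigBirdPegasusForCausalLM can be turned into a greedy next-token prediction by hand, assuming tokenizer, model and outputs from the preceding snippet are still in scope; in practice one would usually let generate() run this loop.

>>> # pick the most likely token at the last position (a single greedy decoding step)
>>> next_token_id = outputs.logits[:, -1, :].argmax(dim=-1)
>>> next_token = tokenizer.decode(next_token_id)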
https://huggingface.co/docs/transformers/model_doc/bloom
BLOOM Overview The BLOOM model has been proposed with its various versions through the BigScience Workshop. BigScience is inspired by other open science initiatives where researchers have pooled their time and resources to collectively achieve a higher impact. The architecture of BLOOM is essentially similar to that of GPT-3 (an auto-regressive model for next-token prediction), but it has been trained on 46 different languages and 13 programming languages. Several smaller versions of the models have been trained on the same dataset. BLOOM is available in the following versions: bloom-560m bloom-1b1 bloom-1b7 bloom-3b bloom-7b1 bloom (176B parameters) Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with BLOOM. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. Text Generation BloomForCausalLM is supported by this causal language modeling example script and notebook. See also: Causal language modeling task guide Text classification task guide Token classification task guide Question answering task guide ⚡️ Inference A blog on Optimization story: Bloom inference. A blog on Incredibly Fast BLOOM Inference with DeepSpeed and Accelerate. ⚙️ Training A blog on The Technology Behind BLOOM Training. BloomConfig class transformers.BloomConfig < source > ( vocab_size = 250880 hidden_size = 64 n_layer = 2 n_head = 8 layer_norm_epsilon = 1e-05 initializer_range = 0.02 use_cache = True bos_token_id = 1 eos_token_id = 2 apply_residual_connection_post_layernorm = False hidden_dropout = 0.0 attention_dropout = 0.0 pretraining_tp = 1 slow_but_exact = False **kwargs ) Parameters vocab_size (int, optional, defaults to 250880) — Vocabulary size of the Bloom model. Defines the maximum number of different tokens that can be represented by the inputs_ids passed when calling BloomModel. Check this discussion on how the vocab_size has been defined. hidden_size (int, optional, defaults to 64) — Dimensionality of the embeddings and hidden states. n_layer (int, optional, defaults to 2) — Number of hidden layers in the Transformer encoder. n_head (int, optional, defaults to 8) — Number of attention heads for each attention layer in the Transformer encoder. layer_norm_epsilon (float, optional, defaults to 1e-5) — The epsilon to use in the layer normalization layers. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. apply_residual_connection_post_layernorm (bool, optional, defaults to False) — If enabled, use the layer norm of the hidden states as the residual in the transformer blocks. hidden_dropout (float, optional, defaults to 0.0) — Dropout rate applied to the attention and MLP outputs in the bias-dropout-add operation. attention_dropout (float, optional, defaults to 0.0) — Dropout rate applied to the attention probabilities. use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). pretraining_tp (int, optional, defaults to 1) — Experimental feature. Tensor parallelism rank used during pretraining with Megatron. Please refer to this document to understand more about it. This value is necessary to ensure exact reproducibility of the pretraining results. Please refer to this issue. Note also that this is enabled only when slow_but_exact=True.
slow_but_exact (bool, optional, defaults to False) — Experimental feature. Whether to use slow but exact implementation of the attention mechanism. While merging the TP rank tensors, due to slicing operations the results may be slightly different between the model trained on Megatron and our model. Please refer to this issue. A solution to obtain more accurate results is to enable this feature. Enabling this will hurt the computational time of the inference. Will be probably resolved in the future once the main model has been fine-tuned with TP_rank=1. This is the configuration class to store the configuration of a BloomModel. It is used to instantiate a Bloom model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to the Bloom architecture bigscience/bloom. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import BloomConfig, BloomModel >>> >>> configuration = BloomConfig() >>> >>> model = BloomModel(configuration) >>> >>> configuration = model.config BloomModel class transformers.BloomModel < source > ( config: BloomConfig ) Parameters config (BloomConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare Bloom Model transformer outputting raw hidden-states without any specific head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Union[typing.Tuple[typing.Tuple[torch.Tensor, torch.Tensor], ...], NoneType] = None attention_mask: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.LongTensor] = None inputs_embeds: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None **deprecated_arguments ) → transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past_key_values is None else past_key_values[0][0].shape[2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. If past_key_values is used, only input_ids that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? past_key_values (Tuple[Tuple[torch.Tensor]] of length config.n_layers) — Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. 
The input_ids which have their past given to this model should not be passed as input_ids as they have already been computed. Each element of past_key_values is a tuple (past_key, past_value): past_key: [batch_size * num_heads, head_dim, kv_length] past_value: [batch_size * num_heads, kv_length, head_dim] attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If past_key_values is used, optionally only the last inputs_embeds have to be input (see past_key_values). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BloomConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. 
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. The BloomModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, BloomModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m") >>> model = BloomModel.from_pretrained("bigscience/bloom-560m") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state BloomTokenizerFast class transformers.BloomTokenizerFast < source > ( vocab_file = None merges_file = None tokenizer_file = None unk_token = '<unk>' bos_token = '<s>' eos_token = '</s>' pad_token = '<pad>' add_prefix_space = False clean_up_tokenization_spaces = False **kwargs ) Parameters vocab_file (str) — Path to the vocabulary file. merges_file (str) — Path to the merges file. errors (str, optional, defaults to "replace") — Paradigm to follow when decoding bytes to UTF-8. See bytes.decode for more information. unk_token (str, optional, defaults to <unk>) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. bos_token (str, optional, defaults to <s>) — The beginning of sequence token. eos_token (str, optional, defaults to </s>) — The end of sequence token. add_prefix_space (bool, optional, defaults to False) — Whether or not to add an initial space to the input. This allows the leading word to be treated just like any other word (the Bloom tokenizer detects the beginning of words by the preceding space). trim_offsets (bool, optional, defaults to True) — Whether or not the post-processing step should trim offsets to avoid including whitespaces. Construct a “fast” Bloom tokenizer (backed by HuggingFace’s tokenizers library), based on byte-level Byte-Pair-Encoding. This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece), so a word will be encoded differently depending on whether it is at the beginning of the sentence (without a space) or not: >>> from transformers import BloomTokenizerFast >>> tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloom") >>> tokenizer("Hello world")["input_ids"] [59414, 8876] >>> tokenizer(" Hello world")["input_ids"] [86153, 8876] You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer, but since the model was not pretrained this way, it might yield a decrease in performance. 
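As a hedged illustration of that workaround (the exact token IDs depend on the BLOOM vocabulary, so no output is shown), you can pass add_prefix_space=True so that a sentence-initial word is encoded as if it were preceded by a space:

>>> from transformers import BloomTokenizerFast

>>> # Sketch only: with add_prefix_space=True, a space is prepended before tokenization,
>>> # so "Hello world" is encoded the same way as " Hello world" in the example above.
>>> tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloom", add_prefix_space=True)
>>> tokenizer("Hello world")["input_ids"]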
When used with is_split_into_words=True, this tokenizer needs to be instantiated with add_prefix_space=True. This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. BloomForCausalLM class transformers.BloomForCausalLM < source > ( config: BloomConfig ) Parameters config (BloomConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The Bloom Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Union[typing.Tuple[typing.Tuple[torch.Tensor, torch.Tensor], ...], NoneType] = None attention_mask: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None **deprecated_arguments ) → transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past_key_values is None else past_key_values[0][0].shape[2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. If past_key_values is used, only input_ids that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? past_key_values (Tuple[Tuple[torch.Tensor]] of length config.n_layers) — Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have their past given to this model should not be passed as input_ids as they have already been computed. Each element of past_key_values is a tuple (past_key, past_value): past_key: [batch_size * num_heads, head_dim, kv_length] past_value: [batch_size * num_heads, kv_length, head_dim] attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. 
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If past_key_values is used, optionally only the last inputs_embeds have to be input (see past_key_values). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set labels = input_ids Indices are selected in [-100, 0, ..., config.vocab_size] All labels set to -100 are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size] A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BloomConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction). logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Cross attentions weights after the attention softmax, used to compute the weighted average in the cross-attention heads. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key, value states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting. Only relevant if config.is_decoder = True. 
Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. The BloomForCausalLM forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> import torch >>> from transformers import AutoTokenizer, BloomForCausalLM >>> tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m") >>> model = BloomForCausalLM.from_pretrained("bigscience/bloom-560m") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs, labels=inputs["input_ids"]) >>> loss = outputs.loss >>> logits = outputs.logits BloomForSequenceClassification class transformers.BloomForSequenceClassification < source > ( config: BloomConfig ) Parameters config (BloomConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The Bloom Model transformer with a sequence classification head on top (linear layer). BloomForSequenceClassification uses the last token in order to do the classification, as other causal models (e.g. GPT-1) do. Since it does classification on the last token, it requires to know the position of the last token. If a pad_token_id is defined in the configuration, it finds the last token that is not a padding token in each row. If no pad_token_id is defined, it simply takes the last value in each row of the batch. Since it cannot guess the padding tokens when inputs_embeds are passed instead of input_ids, it does the same (take the last value in each row of the batch). This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Union[typing.Tuple[typing.Tuple[torch.Tensor, torch.Tensor], ...], NoneType] = None attention_mask: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None **deprecated_arguments ) → transformers.modeling_outputs.SequenceClassifierOutputWithPast or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past_key_values is None else past_key_values[0][0].shape[2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. If past_key_values is used, only input_ids that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. 
What are input IDs? past_key_values (Tuple[Tuple[torch.Tensor]] of length config.n_layers) — Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have their past given to this model should not be passed as input_ids as they have already been computed. Each element of past_key_values is a tuple (past_key, past_value): past_key: [batch_size * num_heads, head_dim, kv_length] past_value: [batch_size * num_heads, kv_length, head_dim] attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If past_key_values is used, optionally only the last inputs_embeds have to be input (see past_key_values). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). Returns transformers.modeling_outputs.SequenceClassifierOutputWithPast or tuple(torch.FloatTensor) A transformers.modeling_outputs.SequenceClassifierOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BloomConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). 
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The BloomForSequenceClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example of single-label classification: >>> import torch >>> from transformers import AutoTokenizer, BloomForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m") >>> model = BloomForSequenceClassification.from_pretrained("bigscience/bloom-560m") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_id = logits.argmax().item() >>> >>> num_labels = len(model.config.id2label) >>> model = BloomForSequenceClassification.from_pretrained("bigscience/bloom-560m", num_labels=num_labels) >>> labels = torch.tensor([1]) >>> loss = model(**inputs, labels=labels).loss Example of multi-label classification: >>> import torch >>> from transformers import AutoTokenizer, BloomForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m") >>> model = BloomForSequenceClassification.from_pretrained("bigscience/bloom-560m", problem_type="multi_label_classification") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5] >>> >>> num_labels = len(model.config.id2label) >>> model = BloomForSequenceClassification.from_pretrained( ... "bigscience/bloom-560m", num_labels=num_labels, problem_type="multi_label_classification" ... ) >>> labels = torch.sum( ... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1 ... ).to(torch.float) >>> loss = model(**inputs, labels=labels).loss BloomForTokenClassification class transformers.BloomForTokenClassification < source > ( config: BloomConfig ) Parameters config (BloomConfig) — Model configuration class with all the parameters of the model. 
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Bloom Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Union[typing.Tuple[typing.Tuple[torch.Tensor, torch.Tensor], ...], NoneType] = None attention_mask: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None **deprecated_arguments ) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past_key_values is None else past_key_values[0][0].shape[2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. If past_key_values is used, only input_ids that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? past_key_values (Tuple[Tuple[torch.Tensor]] of length config.n_layers) — Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have their past given to this model should not be passed as input_ids as they have already been computed. Each element of past_key_values is a tuple (past_key, past_value): past_key: [batch_size * num_heads, head_dim, kv_length] past_value: [batch_size * num_heads, kv_length, head_dim] attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If past_key_values is used, optionally only the last inputs_embeds have to be input (see past_key_values). 
use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_outputs.TokenClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BloomConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The BloomForTokenClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, BloomForTokenClassification >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m") >>> model = BloomForTokenClassification.from_pretrained("bigscience/bloom-560m") >>> inputs = tokenizer( ... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt" ... ) >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_token_class_ids = logits.argmax(-1) >>> >>> >>> >>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]] >>> labels = predicted_token_class_ids >>> loss = model(**inputs, labels=labels).loss BloomForQuestionAnswering class transformers.BloomForQuestionAnswering < source > ( config ) Parameters config (BloomConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. 
Check out the from_pretrained() method to load the model weights. The BLOOM Model transformer with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute span start logits and span end logits). This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None start_positions: typing.Optional[torch.LongTensor] = None end_positions: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) Parameters input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past_key_values is None else past_key_values[0][0].shape[2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. If past_key_values is used, only input_ids that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? past_key_values (Tuple[Tuple[torch.Tensor]] of length config.n_layers) — Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have their past given to this model should not be passed as input_ids as they have already been computed. Each element of past_key_values is a tuple (past_key, past_value): past_key: [batch_size * num_heads, head_dim, kv_length] past_value: [batch_size * num_heads, kv_length, head_dim] attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If past_key_values is used, optionally only the last inputs_embeds have to be input (see past_key_values). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). 
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. start_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. end_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. The BloomForQuestionAnswering forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. FlaxBloomModel class transformers.FlaxBloomModel < source > ( config: BloomConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs ) Parameters config (BloomConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype. Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters. If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16(). The bare Bloom Model transformer outputting raw hidden-states without any specific head on top. This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a Flax Linen flax.nn.Module subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matter related to general usage and behavior. 
Finally, this model supports inherent JAX features such as: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization __call__ < source > ( input_ids attention_mask = None past_key_values: dict = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(torch.FloatTensor) Parameters input_ids (numpy.ndarray of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length. Indices of input sequence tokens in the vocabulary. Indices can be obtained using BloomTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) — Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length]. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_flax_outputs.FlaxBaseModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BloomConfig) and inputs. last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaxBloomPreTrainedModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: >>> from transformers import AutoTokenizer, FlaxBloomModel >>> tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom") >>> model = FlaxBloomModel.from_pretrained("bigscience/bloom") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="jax") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state FlaxBloomForCausalLM class transformers.FlaxBloomForCausalLM < source > ( config: BloomConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs ) Parameters config (BloomConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype. Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters. If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16(). The Bloom Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a Flax Linen flax.nn.Module subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matter related to general usage and behavior. Finally, this model supports inherent JAX features such as: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization __call__ < source > ( input_ids attention_mask = None past_key_values: dict = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_flax_outputs.FlaxMaskedLMOutput or tuple(torch.FloatTensor) Parameters input_ids (numpy.ndarray of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length. Indices of input sequence tokens in the vocabulary. Indices can be obtained using BloomTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) — Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length]. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. 
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_flax_outputs.FlaxMaskedLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (BloomConfig) and inputs. logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaxBloomPreTrainedModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FlaxBloomForCausalLM >>> tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom") >>> model = FlaxBloomForCausalLM.from_pretrained("bigscience/bloom") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="np") >>> outputs = model(**inputs) >>> >>> next_token_logits = outputs.logits[:, -1]
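As a small follow-up sketch (single-step greedy decoding; the predicted token naturally depends on the checkpoint), you can pick the most likely next token from these logits and decode it:

>>> import jax.numpy as jnp

>>> # Greedy choice: take the highest-scoring token at the last position computed above
>>> next_token_id = int(jnp.argmax(next_token_logits, axis=-1)[0])
>>> tokenizer.decode([next_token_id])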
https://huggingface.co/docs/transformers/philosophy
Philosophy 🤗 Transformers is an opinionated library built for: machine learning researchers and educators seeking to use, study or extend large-scale Transformers models. hands-on practitioners who want to fine-tune those models or serve them in production, or both. engineers who just want to download a pretrained model and use it to solve a given machine learning task. The library was designed with two strong goals in mind: Be as easy and fast to use as possible: We strongly limited the number of user-facing abstractions to learn, in fact, there are almost no abstractions, just three standard classes required to use each model: configuration, models, and a preprocessing class (tokenizer for NLP, image processor for vision, feature extractor for audio, and processor for multimodal inputs). All of these classes can be initialized in a simple and unified way from pretrained instances by using a common from_pretrained() method which downloads (if needed), caches and loads the related class instance and associated data (configurations’ hyperparameters, tokenizers’ vocabulary, and models’ weights) from a pretrained checkpoint provided on Hugging Face Hub or your own saved checkpoint. On top of those three base classes, the library provides two APIs: pipeline() for quickly using a model for inference on a given task and Trainer to quickly train or fine-tune a PyTorch model (all TensorFlow models are compatible with Keras.fit). As a consequence, this library is NOT a modular toolbox of building blocks for neural nets. If you want to extend or build upon the library, just use regular Python, PyTorch, TensorFlow, Keras modules and inherit from the base classes of the library to reuse functionalities like model loading and saving. If you’d like to learn more about our coding philosophy for models, check out our Repeat Yourself blog post. Provide state-of-the-art models with performances as close as possible to the original models: We provide at least one example for each architecture which reproduces a result provided by the official authors of said architecture. The code is usually as close to the original code base as possible which means some PyTorch code may be not as pytorchic as it could be as a result of being converted TensorFlow code and vice versa. A few other goals: Expose the models’ internals as consistently as possible: We give access, using a single API, to the full hidden-states and attention weights. The preprocessing classes and base model APIs are standardized to easily switch between models. Incorporate a subjective selection of promising tools for fine-tuning and investigating these models: A simple and consistent way to add new tokens to the vocabulary and embeddings for fine-tuning. Simple ways to mask and prune Transformer heads. Easily switch between PyTorch, TensorFlow 2.0 and Flax, allowing training with one framework and inference with another. Main concepts The library is built around three types of classes for each model: Model classes can be PyTorch models (torch.nn.Module), Keras models (tf.keras.Model) or JAX/Flax models (flax.linen.Module) that work with the pretrained weights provided in the library. Configuration classes store the hyperparameters required to build a model (such as the number of layers and hidden size). You don’t always need to instantiate these yourself. In particular, if you are using a pretrained model without any modification, creating the model will automatically take care of instantiating the configuration (which is part of the model). 
Preprocessing classes convert the raw data into a format accepted by the model. A tokenizer stores the vocabulary for each model and provides methods for encoding strings into lists of token indices to be fed to a model and decoding them back into strings. Image processors preprocess vision inputs, feature extractors preprocess audio inputs, and a processor handles multimodal inputs. All these classes can be instantiated from pretrained instances, saved locally, and shared on the Hub with three methods, illustrated in the short sketch below: from_pretrained() lets you instantiate a model, configuration, and preprocessing class from a pretrained version either provided by the library itself (the supported models can be found on the Model Hub) or stored locally (or on a server) by the user. save_pretrained() lets you save a model, configuration, and preprocessing class locally so that it can be reloaded using from_pretrained(). push_to_hub() lets you share a model, configuration, and preprocessing class on the Hub, so it is easily accessible to everyone.
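As a minimal sketch of this workflow (the local directory and Hub repository names below are hypothetical, and push_to_hub() assumes you are authenticated with the Hub):

>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> # Instantiate a pretrained model and its tokenizer from the Hub
>>> tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
>>> model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

>>> # Save them locally so they can be reloaded later with from_pretrained()
>>> tokenizer.save_pretrained("./my-bloom-560m")
>>> model.save_pretrained("./my-bloom-560m")

>>> # Share them on the Hub under a (hypothetical) repository name
>>> tokenizer.push_to_hub("my-username/my-bloom-560m")
>>> model.push_to_hub("my-username/my-bloom-560m")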
https://huggingface.co/docs/transformers/pr_checks
Checks on a Pull Request When you open a pull request on 🤗 Transformers, a fair number of checks will be run to make sure the patch you are adding is not breaking anything existing. Those checks are of four types: regular tests, documentation build, code and documentation style, and general repository consistency. In this document, we will take a stab at explaining what those various checks are and the reason behind them, as well as how to debug them locally if one of them fails on your PR. Note that, ideally, they require you to have a dev install: pip install transformers[dev] or for an editable install: pip install -e .[dev] inside the Transformers repo. Since the number of optional dependencies of Transformers has grown a lot, it’s possible you don’t manage to get all of them. If the dev install fails, make sure to install the Deep Learning framework you are working with (PyTorch, TensorFlow and/or Flax) then do pip install transformers[quality] or for an editable install: pip install -e .[quality] Tests All the jobs that begin with ci/circleci: run_tests_ run parts of the Transformers testing suite. Each of those jobs focuses on a part of the library in a certain environment: for instance ci/circleci: run_tests_pipelines_tf runs the pipelines tests in an environment where only TensorFlow is installed. Note that to avoid running tests when there is no real change in the modules they are testing, only part of the test suite is run each time: a utility is run to determine the differences in the library between before and after the PR (what GitHub shows you in the “Files changed” tab) and picks the tests impacted by that diff. That utility can be run locally with: python utils/tests_fetcher.py from the root of the Transformers repo. It will: Check for each file in the diff if the changes are in the code or only in comments or docstrings. Only the files with real code changes are kept. Build an internal map that gives for each file of the source code of the library all the files it recursively impacts. Module A is said to impact module B if module B imports module A. For the recursive impact, we need a chain of modules going from module A to module B in which each module imports the previous one (a short illustrative sketch of this idea is given below, just before the Code and documentation style section). Apply this map on the files gathered in step 1, which gives us the list of model files impacted by the PR. Map each of those files to their corresponding test file(s) and get the list of tests to run. When executing the script locally, you should get the results of steps 1, 3 and 4 printed and thus know which tests are run. The script will also create a file named test_list.txt which contains the list of tests to run, and you can run them locally with the following command: python -m pytest -n 8 --dist=loadfile -rA -s $(cat test_list.txt) Just in case anything slipped through the cracks, the full test suite is also run daily. Documentation build The build_pr_documentation job builds and generates a preview of the documentation to make sure everything looks okay once your PR is merged. A bot will add a link to preview the documentation in your PR. Any changes you make to the PR are automatically updated in the preview. If the documentation fails to build, click on Details next to the failed job to see where things went wrong. Often, the error is as simple as a missing file in the toctree. If you’re interested in building or previewing the documentation locally, take a look at the README.md in the docs folder. 
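Coming back to the test-fetching utility described in the Tests section above, the snippet below is only a hedged illustration of the transitive-impact idea (module A impacts module B if B imports A, directly or through a chain of imports); it is not the actual code of utils/tests_fetcher.py, and the module names are hypothetical:

>>> # Hypothetical direct edges: imported_by[A] lists the modules that import A
>>> imported_by = {
...     "configuration_bloom": ["modeling_bloom", "tokenization_bloom_fast"],
...     "modeling_bloom": ["modeling_auto"],
... }

>>> def recursive_impact(module):
...     """Return every module reachable by following the 'is imported by' edges."""
...     impacted, stack = set(), [module]
...     while stack:
...         current = stack.pop()
...         for dependent in imported_by.get(current, []):
...             if dependent not in impacted:
...                 impacted.add(dependent)
...                 stack.append(dependent)
...     return impacted

>>> sorted(recursive_impact("configuration_bloom"))
['modeling_auto', 'modeling_bloom', 'tokenization_bloom_fast']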
Code and documentation style Code formatting is applied to all the source files, the examples and the tests using black and ruff. We also have a custom tool taking care of the formatting of docstrings and rst files (utils/style_doc.py), as well as the order of the lazy imports performed in the Transformers __init__.py files (utils/custom_init_isort.py). All of this can be launched by executing make style. The CI checks those have been applied inside the ci/circleci: check_code_quality check. It also runs ruff, which will have a basic look at your code and will complain if it finds an undefined variable, or one that is not used. To run that check locally, use make quality. This can take a lot of time, so to run the same thing on only the files you modified in the current branch, run make fixup. This last command will also run all the additional checks for the repository consistency. Let’s have a look at them. Repository consistency This regroups all the tests to make sure your PR leaves the repository in a good state, and is performed by the ci/circleci: check_repository_consistency check. You can locally run that check by executing the following: make repo-consistency This checks that: All objects added to the init are documented (performed by utils/check_repo.py) All __init__.py files have the same content in their two sections (performed by utils/check_inits.py) All code identified as a copy from another module is consistent with the original (performed by utils/check_copies.py) All configuration classes have at least one valid checkpoint mentioned in their docstrings (performed by utils/check_config_docstrings.py) All configuration classes only contain attributes that are used in corresponding modeling files (performed by utils/check_config_attributes.py) The translations of the READMEs and the index of the doc have the same model list as the main README (performed by utils/check_copies.py) The auto-generated tables in the documentation are up to date (performed by utils/check_table.py) The library has all objects available even if not all optional dependencies are installed (performed by utils/check_dummies.py) Should this check fail, the first two items require manual fixing, the last four can be fixed automatically for you by running the command make fix-copies. Additional checks concern PRs that add new models, mainly that: All models added are in an Auto-mapping (performed by utils/check_repo.py) All models are properly tested (performed by utils/check_repo.py) Check copies Since the Transformers library is very opinionated with respect to model code, and each model should fully be implemented in a single file without relying on other models, we have added a mechanism that checks whether a copy of the code of a layer of a given model stays consistent with the original. This way, when there is a bug fix, we can see all other impacted models and choose to trickle down the modification or break the copy. If a file is a full copy of another file, you should register it in the constant FULL_COPIES of utils/check_copies.py. This mechanism relies on comments of the form # Copied from xxx. The xxx should contain the whole path to the class or function which is being copied below. For instance, RobertaSelfOutput is a direct copy of the BertSelfOutput class, so you can see here it has a comment (illustrated below). Note that instead of applying this to a whole class, you can apply it to the relevant methods that are copied from.
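As a schematic illustration of what such a comment looks like in a modeling file (the module path below follows the library's usual layout and should be checked against the actual source):
from torch import nn

# Copied from transformers.models.bert.modeling_bert.BertSelfOutput
class RobertaSelfOutput(nn.Module):
    ...  # the body is an exact copy of BertSelfOutput
The check_copies utility compares the body below such a comment with the referenced original and fails if they have drifted apart.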
For instance here you can see how RobertaPreTrainedModel._init_weights is copied from the same method in BertPreTrainedModel with the comment: Sometimes the copy is exactly the same except for names: for instance in RobertaAttention, we use RobertaSelfAttention instead of BertSelfAttention but other than that, the code is exactly the same. This is why # Copied from supports simple string replacements with the following syntax: Copied from xxx with foo->bar. This means the code is copied with all instances of foo being replaced by bar. You can see how it is used here in RobertaAttention with the comment: Note that there shouldn’t be any spaces around the arrow (unless that space is part of the pattern to replace of course). You can add several patterns separated by a comma. For instance here CamembertForMaskedLM is a direct copy of RobertaForMaskedLM with two replacements: Roberta to Camembert and ROBERTA to CAMEMBERT. You can see here this is done with the comment: If the order matters (because one of the replacements might conflict with a previous one), the replacements are executed from left to right. If the replacements change the formatting (if you replace a short name by a very long name for instance), the copy is checked after applying the auto-formatter. Another way, when the patterns are just different casings of the same replacement (with an uppercased and a lowercased variant), is simply to add the option all-casing. Here is an example in MobileBertForSequenceClassification with the comment: In this case, the code is copied from BertForSequenceClassification by replacing: Bert by MobileBert (for instance when using MobileBertModel in the init) bert by mobilebert (for instance when defining self.mobilebert) BERT by MOBILEBERT (in the constant MOBILEBERT_INPUTS_DOCSTRING)
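To make the replacement syntax concrete, here are illustrative comment lines in the three styles described above (the exact module paths are indicative; check the actual source files for the authoritative versions):
# Copied from transformers.models.bert.modeling_bert.BertAttention with Bert->Roberta
# Copied from transformers.models.roberta.modeling_roberta.RobertaForMaskedLM with Roberta->Camembert, ROBERTA->CAMEMBERT
# Copied from transformers.models.bert.modeling_bert.BertForSequenceClassification with Bert->MobileBert all-casing
The first uses a single replacement, the second chains two replacements separated by a comma, and the third uses the all-casing option to cover Bert/bert/BERT at once.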
https://huggingface.co/docs/transformers/model_doc/clipseg
CLIPSeg Overview The CLIPSeg model was proposed in Image Segmentation Using Text and Image Prompts by Timo Lüddecke and Alexander Ecker. CLIPSeg adds a minimal decoder on top of a frozen CLIP model for zero- and one-shot image segmentation. The abstract from the paper is the following: Image segmentation is usually addressed by training a model for a fixed set of object classes. Incorporating additional classes or more complex queries later is expensive as it requires re-training the model on a dataset that encompasses these expressions. Here we propose a system that can generate image segmentations based on arbitrary prompts at test time. A prompt can be either a text or an image. This approach enables us to create a unified model (trained once) for three common segmentation tasks, which come with distinct challenges: referring expression segmentation, zero-shot segmentation and one-shot segmentation. We build upon the CLIP model as a backbone which we extend with a transformer-based decoder that enables dense prediction. After training on an extended version of the PhraseCut dataset, our system generates a binary segmentation map for an image based on a free-text prompt or on an additional image expressing the query. We analyze different variants of the latter image-based prompts in detail. This novel hybrid input allows for dynamic adaptation not only to the three segmentation tasks mentioned above, but to any binary segmentation task where a text or image query can be formulated. Finally, we find our system to adapt well to generalized queries involving affordances or properties. Tips: CLIPSegForImageSegmentation adds a decoder on top of CLIPSegModel. The latter is identical to CLIPModel. CLIPSegForImageSegmentation can generate image segmentations based on arbitrary prompts at test time. A prompt can be either a text (provided to the model as input_ids) or an image (provided to the model as conditional_pixel_values). One can also provide custom conditional embeddings (provided to the model as conditional_embeddings). CLIPSeg overview. Taken from the original paper. This model was contributed by nielsr. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with CLIPSeg. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. Image Segmentation A notebook that illustrates zero-shot image segmentation with CLIPSeg. CLIPSegConfig class transformers.CLIPSegConfig < source > ( text_config = None vision_config = None projection_dim = 512 logit_scale_init_value = 2.6592 extract_layers = [3, 6, 9] reduce_dim = 64 decoder_num_attention_heads = 4 decoder_attention_dropout = 0.0 decoder_hidden_act = 'quick_gelu' decoder_intermediate_size = 2048 conditional_layer = 0 use_complex_transposed_convolution = False **kwargs ) Parameters text_config (dict, optional) — Dictionary of configuration options used to initialize CLIPSegTextConfig. vision_config (dict, optional) — Dictionary of configuration options used to initialize CLIPSegVisionConfig. projection_dim (int, optional, defaults to 512) — Dimensionality of text and vision projection layers. logit_scale_init_value (float, optional, defaults to 2.6592) — The initial value of the logit_scale parameter. Default is used as per the original CLIPSeg implementation. 
extract_layers (List[int], optional, defaults to [3, 6, 9]) — Layers to extract when forwarding the query image through the frozen visual backbone of CLIP. reduce_dim (int, optional, defaults to 64) — Dimensionality to reduce the CLIP vision embedding. decoder_num_attention_heads (int, optional, defaults to 4) — Number of attention heads in the decoder of CLIPSeg. decoder_attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities. decoder_hidden_act (str or function, optional, defaults to "quick_gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu", "gelu_new" and "quick_gelu" are supported. decoder_intermediate_size (int, optional, defaults to 2048) — Dimensionality of the “intermediate” (i.e., feed-forward) layers in the Transformer decoder. conditional_layer (int, optional, defaults to 0) — The layer of the Transformer encoder to use, whose activations will be combined with the condition embeddings using FiLM (Feature-wise Linear Modulation). If 0, the last layer is used. use_complex_transposed_convolution (bool, optional, defaults to False) — Whether to use a more complex transposed convolution in the decoder, enabling more fine-grained segmentation. kwargs (optional) — Dictionary of keyword arguments. CLIPSegConfig is the configuration class to store the configuration of a CLIPSegModel. It is used to instantiate a CLIPSeg model according to the specified arguments, defining the text model and vision model configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the CLIPSeg CIDAS/clipseg-rd64 architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import CLIPSegConfig, CLIPSegModel >>> configuration = CLIPSegConfig() >>> model = CLIPSegModel(configuration) >>> configuration = model.config >>> config_text = CLIPSegTextConfig() >>> config_vision = CLIPSegVisionConfig() >>> config = CLIPSegConfig.from_text_vision_configs(config_text, config_vision) from_text_vision_configs < source > ( text_config: CLIPSegTextConfig vision_config: CLIPSegVisionConfig **kwargs ) → CLIPSegConfig An instance of a configuration object Instantiate a CLIPSegConfig (or a derived class) from clipseg text model configuration and clipseg vision model configuration. CLIPSegTextConfig class transformers.CLIPSegTextConfig < source > ( vocab_size = 49408 hidden_size = 512 intermediate_size = 2048 num_hidden_layers = 12 num_attention_heads = 8 max_position_embeddings = 77 hidden_act = 'quick_gelu' layer_norm_eps = 1e-05 attention_dropout = 0.0 initializer_range = 0.02 initializer_factor = 1.0 pad_token_id = 1 bos_token_id = 49406 eos_token_id = 49407 **kwargs ) Parameters vocab_size (int, optional, defaults to 49408) — Vocabulary size of the CLIPSeg text model. Defines the number of different tokens that can be represented by the input_ids passed when calling CLIPSegModel. hidden_size (int, optional, defaults to 512) — Dimensionality of the encoder layers and the pooler layer. intermediate_size (int, optional, defaults to 2048) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder. num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. 
num_attention_heads (int, optional, defaults to 8) — Number of attention heads for each attention layer in the Transformer encoder. max_position_embeddings (int, optional, defaults to 77) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). hidden_act (str or function, optional, defaults to "quick_gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu", "gelu_new" and "quick_gelu" are supported. layer_norm_eps (float, optional, defaults to 1e-5) — The epsilon used by the layer normalization layers. attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. initializer_factor (float, optional, defaults to 1) — A factor for initializing all weight matrices (should be kept to 1, used internally for initialization testing). This is the configuration class to store the configuration of a CLIPSegModel. It is used to instantiate a CLIPSeg model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the CLIPSeg CIDAS/clipseg-rd64 architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import CLIPSegTextConfig, CLIPSegTextModel >>> configuration = CLIPSegTextConfig() >>> model = CLIPSegTextModel(configuration) >>> configuration = model.config CLIPSegVisionConfig class transformers.CLIPSegVisionConfig < source > ( hidden_size = 768 intermediate_size = 3072 num_hidden_layers = 12 num_attention_heads = 12 num_channels = 3 image_size = 224 patch_size = 32 hidden_act = 'quick_gelu' layer_norm_eps = 1e-05 attention_dropout = 0.0 initializer_range = 0.02 initializer_factor = 1.0 **kwargs ) Parameters hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder. num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. image_size (int, optional, defaults to 224) — The size (resolution) of each image. patch_size (int, optional, defaults to 32) — The size (resolution) of each patch. hidden_act (str or function, optional, defaults to "quick_gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu", "gelu_new" and "quick_gelu" are supported. layer_norm_eps (float, optional, defaults to 1e-5) — The epsilon used by the layer normalization layers. attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. 
initializer_factor (float, optional, defaults to 1) — A factor for initializing all weight matrices (should be kept to 1, used internally for initialization testing). This is the configuration class to store the configuration of a CLIPSegModel. It is used to instantiate a CLIPSeg model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the CLIPSeg CIDAS/clipseg-rd64 architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import CLIPSegVisionConfig, CLIPSegVisionModel >>> configuration = CLIPSegVisionConfig() >>> model = CLIPSegVisionModel(configuration) >>> configuration = model.config CLIPSegProcessor class transformers.CLIPSegProcessor < source > ( image_processor = None tokenizer = None **kwargs ) Parameters image_processor (ViTImageProcessor) — The image processor is a required input. tokenizer (CLIPTokenizerFast) — The tokenizer is a required input. Constructs a CLIPSeg processor which wraps a CLIPSeg image processor and a CLIP tokenizer into a single processor. CLIPSegProcessor offers all the functionalities of ViTImageProcessor and CLIPTokenizerFast. See the __call__() and decode() for more information. This method forwards all its arguments to CLIPTokenizerFast’s batch_decode(). Please refer to the docstring of this method for more information. This method forwards all its arguments to CLIPTokenizerFast’s decode(). Please refer to the docstring of this method for more information. CLIPSegModel class transformers.CLIPSegModel < source > ( config: CLIPSegConfig ) Parameters config (CLIPSegConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None pixel_values: typing.Optional[torch.FloatTensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.LongTensor] = None return_loss: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.clipseg.modeling_clipseg.CLIPSegOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. 
Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See CLIPImageProcessor.call() for details. return_loss (bool, optional) — Whether or not to return the contrastive loss. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.models.clipseg.modeling_clipseg.CLIPSegOutput or tuple(torch.FloatTensor) A transformers.models.clipseg.modeling_clipseg.CLIPSegOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.clipseg.configuration_clipseg.CLIPSegConfig'>) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when return_loss is True) — Contrastive loss for image-text similarity. logits_per_image:(torch.FloatTensor of shape (image_batch_size, text_batch_size)) — The scaled dot product scores between image_embeds and text_embeds. This represents the image-text similarity scores. logits_per_text:(torch.FloatTensor of shape (text_batch_size, image_batch_size)) — The scaled dot product scores between text_embeds and image_embeds. This represents the text-image similarity scores. text_embeds(torch.FloatTensor of shape (batch_size, output_dim) — The text embeddings obtained by applying the projection layer to the pooled output of CLIPSegTextModel. image_embeds(torch.FloatTensor of shape (batch_size, output_dim) — The image embeddings obtained by applying the projection layer to the pooled output of CLIPSegVisionModel. text_model_output(BaseModelOutputWithPooling): The output of the CLIPSegTextModel. vision_model_output(BaseModelOutputWithPooling): The output of the CLIPSegVisionModel. The CLIPSegModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, CLIPSegModel >>> processor = AutoProcessor.from_pretrained("CIDAS/clipseg-rd64-refined") >>> model = CLIPSegModel.from_pretrained("CIDAS/clipseg-rd64-refined") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor( ... text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True ... 
) >>> outputs = model(**inputs) >>> logits_per_image = outputs.logits_per_image >>> probs = logits_per_image.softmax(dim=1) get_text_features < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → text_features (torch.FloatTensor of shape (batch_size, output_dim) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns text_features (torch.FloatTensor of shape (batch_size, output_dim) The text embeddings obtained by applying the projection layer to the pooled output of CLIPSegTextModel. The CLIPSegModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import AutoTokenizer, CLIPSegModel >>> tokenizer = AutoTokenizer.from_pretrained("CIDAS/clipseg-rd64-refined") >>> model = CLIPSegModel.from_pretrained("CIDAS/clipseg-rd64-refined") >>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt") >>> text_features = model.get_text_features(**inputs) get_image_features < source > ( pixel_values: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → image_features (torch.FloatTensor of shape (batch_size, output_dim) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See CLIPImageProcessor.call() for details. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. 
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns image_features (torch.FloatTensor of shape (batch_size, output_dim) The image embeddings obtained by applying the projection layer to the pooled output of CLIPSegVisionModel. The CLIPSegModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, CLIPSegModel >>> processor = AutoProcessor.from_pretrained("CIDAS/clipseg-rd64-refined") >>> model = CLIPSegModel.from_pretrained("CIDAS/clipseg-rd64-refined") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor(images=image, return_tensors="pt") >>> image_features = model.get_image_features(**inputs) CLIPSegTextModel class transformers.CLIPSegTextModel < source > ( config: CLIPSegTextConfig ) forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.clipseg.configuration_clipseg.CLIPSegTextConfig'>) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. 
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The CLIPSegTextModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import AutoTokenizer, CLIPSegTextModel >>> tokenizer = AutoTokenizer.from_pretrained("CIDAS/clipseg-rd64-refined") >>> model = CLIPSegTextModel.from_pretrained("CIDAS/clipseg-rd64-refined") >>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_state = outputs.last_hidden_state >>> pooled_output = outputs.pooler_output CLIPSegVisionModel class transformers.CLIPSegVisionModel < source > ( config: CLIPSegVisionConfig ) forward < source > ( pixel_values: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See CLIPImageProcessor.call() for details. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.clipseg.configuration_clipseg.CLIPSegVisionConfig'>) and inputs. 
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The CLIPSegVisionModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, CLIPSegVisionModel >>> processor = AutoProcessor.from_pretrained("CIDAS/clipseg-rd64-refined") >>> model = CLIPSegVisionModel.from_pretrained("CIDAS/clipseg-rd64-refined") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor(images=image, return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_state = outputs.last_hidden_state >>> pooled_output = outputs.pooler_output CLIPSegForImageSegmentation class transformers.CLIPSegForImageSegmentation < source > ( config: CLIPSegConfig ) Parameters config (CLIPSegConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. CLIPSeg model with a Transformer-based decoder on top for zero-shot and one-shot image segmentation. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
forward < source > ( input_ids: typing.Optional[torch.FloatTensor] = None pixel_values: typing.Optional[torch.FloatTensor] = None conditional_pixel_values: typing.Optional[torch.FloatTensor] = None conditional_embeddings: typing.Optional[torch.FloatTensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.LongTensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.clipseg.modeling_clipseg.CLIPSegImageSegmentationOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See CLIPImageProcessor.call() for details. return_loss (bool, optional) — Whether or not to return the contrastive loss. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). Returns transformers.models.clipseg.modeling_clipseg.CLIPSegImageSegmentationOutput or tuple(torch.FloatTensor) A transformers.models.clipseg.modeling_clipseg.CLIPSegImageSegmentationOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.clipseg.configuration_clipseg.CLIPSegTextConfig'>) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when return_loss is True) — Contrastive loss for image-text similarity. … vision_model_output (BaseModelOutputWithPooling) — The output of the CLIPSegVisionModel. The CLIPSegForImageSegmentation forward method, overrides the __call__ special method. 
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import AutoProcessor, CLIPSegForImageSegmentation >>> from PIL import Image >>> import requests >>> processor = AutoProcessor.from_pretrained("CIDAS/clipseg-rd64-refined") >>> model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> texts = ["a cat", "a remote", "a blanket"] >>> inputs = processor(text=texts, images=[image] * len(texts), padding=True, return_tensors="pt") >>> outputs = model(**inputs) >>> logits = outputs.logits >>> print(logits.shape) torch.Size([3, 352, 352])
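To go from these raw logits to usable masks, one possible post-processing sketch is shown below; it is illustrative rather than part of the documented API, and the sigmoid plus the 0.5 threshold are assumptions you may want to adapt for your use case:
>>> import torch
>>> # One [352, 352] map per text prompt; a sigmoid turns the logits into per-pixel probabilities
>>> masks = torch.sigmoid(logits)
>>> # Binarize, for example to get a boolean mask for the "a cat" prompt
>>> cat_mask = masks[0] > 0.5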
https://huggingface.co/docs/transformers/task_summary
What 🤗 Transformers can do 🤗 Transformers is a library of pretrained state-of-the-art models for natural language processing (NLP), computer vision, and audio and speech processing tasks. Not only does the library contain Transformer models, but it also has non-Transformer models like modern convolutional networks for computer vision tasks. If you look at some of the most popular consumer products today, like smartphones, apps, and televisions, odds are that some kind of deep learning technology is behind it. Want to remove a background object from a picture taken by your smartphone? This is an example of a panoptic segmentation task (don’t worry if you don’t know what this means yet, we’ll describe it in the following sections!). This page provides an overview of the different speech and audio, computer vision, and NLP tasks that can be solved with the 🤗 Transformers library in just three lines of code! Audio Audio and speech processing tasks are a little different from the other modalities mainly because audio as an input is a continuous signal. Unlike text, a raw audio waveform can’t be neatly split into discrete chunks the way a sentence can be divided into words. To get around this, the raw audio signal is typically sampled at regular intervals. If you take more samples within an interval, the sampling rate is higher, and the audio more closely resembles the original audio source. Previous approaches preprocessed the audio to extract useful features from it. It is now more common to start audio and speech processing tasks by directly feeding the raw audio waveform to a feature encoder to extract an audio representation. This simplifies the preprocessing step and allows the model to learn the most essential features. Audio classification Audio classification is a task that labels audio data from a predefined set of classes. It is a broad category with many specific applications, some of which include: acoustic scene classification: label audio with a scene label (“office”, “beach”, “stadium”) acoustic event detection: label audio with a sound event label (“car horn”, “whale calling”, “glass breaking”) tagging: label audio containing multiple sounds (birdsongs, speaker identification in a meeting) music classification: label music with a genre label (“metal”, “hip-hop”, “country”) >>> from transformers import pipeline >>> classifier = pipeline(task="audio-classification", model="superb/hubert-base-superb-er") >>> preds = classifier("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac") >>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds] >>> preds [{'score': 0.4532, 'label': 'hap'}, {'score': 0.3622, 'label': 'sad'}, {'score': 0.0943, 'label': 'neu'}, {'score': 0.0903, 'label': 'ang'}] Automatic speech recognition Automatic speech recognition (ASR) transcribes speech into text. It is one of the most common audio tasks due partly to speech being such a natural form of human communication. Today, ASR systems are embedded in “smart” technology products like speakers, phones, and cars. We can ask our virtual assistants to play music, set reminders, and tell us the weather. But one of the key challenges Transformer architectures have helped with is in low-resource languages. By pretraining on large amounts of speech data, finetuning the model on only one hour of labeled speech data in a low-resource language can still produce high-quality results compared to previous ASR systems trained on 100x more labeled data. 
>>> from transformers import pipeline >>> transcriber = pipeline(task="automatic-speech-recognition", model="openai/whisper-small") >>> transcriber("https://huggingface.co/datasets/Narsil/asr_dummy/resolve/main/mlk.flac") {'text': ' I have a dream that one day this nation will rise up and live out the true meaning of its creed.'} Computer vision One of the first and earliest successful computer vision tasks was recognizing images of zip code numbers using a convolutional neural network (CNN). An image is composed of pixels, and each pixel has a numerical value. This makes it easy to represent an image as a matrix of pixel values. Each particular combination of pixel values describes the colors of an image. Two general ways computer vision tasks can be solved are: Use convolutions to learn the hierarchical features of an image from low-level features to high-level abstract things. Split an image into patches and use a Transformer to gradually learn how each image patch is related to each other to form an image. Unlike the bottom-up approach favored by a CNN, this is kind of like starting out with a blurry image and then gradually bringing it into focus. Image classification Image classification labels an entire image from a predefined set of classes. Like most classification tasks, there are many practical use cases for image classification, some of which include: healthcare: label medical images to detect disease or monitor patient health environment: label satellite images to monitor deforestation, inform wildland management or detect wildfires agriculture: label images of crops to monitor plant health or satellite images for land use monitoring ecology: label images of animal or plant species to monitor wildlife populations or track endangered species >>> from transformers import pipeline >>> classifier = pipeline(task="image-classification") >>> preds = classifier( ... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg" ... ) >>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds] >>> print(*preds, sep="\n") {'score': 0.4335, 'label': 'lynx, catamount'} {'score': 0.0348, 'label': 'cougar, puma, catamount, mountain lion, painter, panther, Felis concolor'} {'score': 0.0324, 'label': 'snow leopard, ounce, Panthera uncia'} {'score': 0.0239, 'label': 'Egyptian cat'} {'score': 0.0229, 'label': 'tiger cat'} Object detection Unlike image classification, object detection identifies multiple objects within an image and the objects’ positions in an image (defined by the bounding box). Some example applications of object detection include: self-driving vehicles: detect everyday traffic objects such as other vehicles, pedestrians, and traffic lights remote sensing: disaster monitoring, urban planning, and weather forecasting defect detection: detect cracks or structural damage in buildings, and manufacturing defects >>> from transformers import pipeline >>> detector = pipeline(task="object-detection") >>> preds = detector( ... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg" ... ) >>> preds = [{"score": round(pred["score"], 4), "label": pred["label"], "box": pred["box"]} for pred in preds] >>> preds [{'score': 0.9865, 'label': 'cat', 'box': {'xmin': 178, 'ymin': 154, 'xmax': 882, 'ymax': 598}}] Image segmentation Image segmentation is a pixel-level task that assigns every pixel in an image to a class. 
It differs from object detection, which uses bounding boxes to label and predict objects in an image because segmentation is more granular. Segmentation can detect objects at a pixel-level. There are several types of image segmentation: instance segmentation: in addition to labeling the class of an object, it also labels each distinct instance of an object (“dog-1”, “dog-2”) panoptic segmentation: a combination of semantic and instance segmentation; it labels each pixel with a semantic class and each distinct instance of an object Segmentation tasks are helpful in self-driving vehicles to create a pixel-level map of the world around them so they can navigate safely around pedestrians and other vehicles. It is also useful for medical imaging, where the task’s finer granularity can help identify abnormal cells or organ features. Image segmentation can also be used in ecommerce to virtually try on clothes or create augmented reality experiences by overlaying objects in the real world through your camera. >>> from transformers import pipeline >>> segmenter = pipeline(task="image-segmentation") >>> preds = segmenter( ... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg" ... ) >>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds] >>> print(*preds, sep="\n") {'score': 0.9879, 'label': 'LABEL_184'} {'score': 0.9973, 'label': 'snow'} {'score': 0.9972, 'label': 'cat'} Depth estimation Depth estimation predicts the distance of each pixel in an image from the camera. This computer vision task is especially important for scene understanding and reconstruction. For example, in self-driving cars, vehicles need to understand how far objects like pedestrians, traffic signs, and other vehicles are to avoid obstacles and collisions. Depth information is also helpful for constructing 3D representations from 2D images and can be used to create high-quality 3D representations of biological structures or buildings. There are two approaches to depth estimation: stereo: depths are estimated by comparing two images of the same image from slightly different angles monocular: depths are estimated from a single image >>> from transformers import pipeline >>> depth_estimator = pipeline(task="depth-estimation") >>> preds = depth_estimator( ... "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg" ... ) Natural language processing NLP tasks are among the most common types of tasks because text is such a natural way for us to communicate. To get text into a format recognized by a model, it needs to be tokenized. This means dividing a sequence of text into separate words or subwords (tokens) and then converting these tokens into numbers. As a result, you can represent a sequence of text as a sequence of numbers, and once you have a sequence of numbers, it can be input into a model to solve all sorts of NLP tasks! Text classification Like classification tasks in any modality, text classification labels a sequence of text (it can be sentence-level, a paragraph, or a document) from a predefined set of classes. 
There are many practical applications for text classification, some of which include: sentiment analysis: label text according to some polarity like positive or negative which can inform and support decision-making in fields like politics, finance, and marketing content classification: label text according to some topic to help organize and filter information in news and social media feeds (weather, sports, finance, etc.) >>> from transformers import pipeline >>> classifier = pipeline(task="sentiment-analysis") >>> preds = classifier("Hugging Face is the best thing since sliced bread!") >>> preds = [{"score": round(pred["score"], 4), "label": pred["label"]} for pred in preds] >>> preds [{'score': 0.9991, 'label': 'POSITIVE'}] Token classification In any NLP task, text is preprocessed by separating the sequence of text into individual words or subwords. These are known as tokens. Token classification assigns each token a label from a predefined set of classes. Two common types of token classification are: named entity recognition (NER): label a token according to an entity category like organization, person, location or date. NER is especially popular in biomedical settings, where it can label genes, proteins, and drug names. part-of-speech tagging (POS): label a token according to its part-of-speech like noun, verb, or adjective. POS is useful for helping translation systems understand how two identical words are grammatically different (bank as a noun versus bank as a verb). >>> from transformers import pipeline >>> classifier = pipeline(task="ner") >>> preds = classifier("Hugging Face is a French company based in New York City.") >>> preds = [ ... { ... "entity": pred["entity"], ... "score": round(pred["score"], 4), ... "index": pred["index"], ... "word": pred["word"], ... "start": pred["start"], ... "end": pred["end"], ... } ... for pred in preds ... ] >>> print(*preds, sep="\n") {'entity': 'I-ORG', 'score': 0.9968, 'index': 1, 'word': 'Hu', 'start': 0, 'end': 2} {'entity': 'I-ORG', 'score': 0.9293, 'index': 2, 'word': '##gging', 'start': 2, 'end': 7} {'entity': 'I-ORG', 'score': 0.9763, 'index': 3, 'word': 'Face', 'start': 8, 'end': 12} {'entity': 'I-MISC', 'score': 0.9983, 'index': 6, 'word': 'French', 'start': 18, 'end': 24} {'entity': 'I-LOC', 'score': 0.999, 'index': 10, 'word': 'New', 'start': 42, 'end': 45} {'entity': 'I-LOC', 'score': 0.9987, 'index': 11, 'word': 'York', 'start': 46, 'end': 50} {'entity': 'I-LOC', 'score': 0.9992, 'index': 12, 'word': 'City', 'start': 51, 'end': 55} Question answering Question answering is another token-level task that returns an answer to a question, sometimes with context (open-domain) and other times without context (closed-domain). This task happens whenever we ask a virtual assistant something like whether a restaurant is open. It can also provide customer or technical support and help search engines retrieve the relevant information you’re asking for. There are two common types of question answering: extractive: given a question and some context, the answer is a span of text from the context the model must extract abstractive: given a question and some context, the answer is generated from the context; this approach is handled by the Text2TextGenerationPipeline instead of the QuestionAnsweringPipeline shown below >>> from transformers import pipeline >>> question_answerer = pipeline(task="question-answering") >>> preds = question_answerer( ... question="What is the name of the repository?", ... 
context="The name of the repository is huggingface/transformers", ... ) >>> print( ... f"score: {round(preds['score'], 4)}, start: {preds['start']}, end: {preds['end']}, answer: {preds['answer']}" ... ) score: 0.9327, start: 30, end: 54, answer: huggingface/transformers Summarization Summarization creates a shorter version of a text from a longer one while trying to preserve most of the meaning of the original document. Summarization is a sequence-to-sequence task; it outputs a shorter text sequence than the input. There are a lot of long-form documents that can be summarized to help readers quickly understand the main points. Legislative bills, legal and financial documents, patents, and scientific papers are a few examples of documents that could be summarized to save readers time and serve as a reading aid. Like question answering, there are two types of summarization: extractive: identify and extract the most important sentences from the original text abstractive: generate the target summary (which may include new words not in the input document) from the original text; the SummarizationPipeline uses the abstractive approach >>> from transformers import pipeline >>> summarizer = pipeline(task="summarization") >>> summarizer( ... "In this work, we presented the Transformer, the first sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention. For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers. On both WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks, we achieve a new state of the art. In the former task our best model outperforms even all previously reported ensembles." ... ) [{'summary_text': ' The Transformer is the first sequence transduction model based entirely on attention . It replaces the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention . For translation tasks, the Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers .'}] Translation Translation converts a sequence of text in one language to another. It is important in helping people from different backgrounds communicate with each other, help translate content to reach wider audiences, and even be a learning tool to help people learn a new language. Along with summarization, translation is a sequence-to-sequence task, meaning the model receives an input sequence and returns a target output sequence. In the early days, translation models were mostly monolingual, but recently, there has been increasing interest in multilingual models that can translate between many pairs of languages. >>> from transformers import pipeline >>> text = "translate English to French: Hugging Face is a community-based open-source platform for machine learning." >>> translator = pipeline(task="translation", model="t5-small") >>> translator(text) [{'translation_text': "Hugging Face est une tribune communautaire de l'apprentissage des machines."}] Language modeling Language modeling is a task that predicts a word in a sequence of text. It has become a very popular NLP task because a pretrained language model can be finetuned for many other downstream tasks. Lately, there has been a lot of interest in large language models (LLMs) which demonstrate zero- or few-shot learning. 
This means the model can solve tasks it wasn’t explicitly trained to do! Language models can be used to generate fluent and convincing text, though you need to be careful since the text may not always be accurate. There are two types of language modeling: causal: the model’s objective is to predict the next token in a sequence, and future tokens are masked >>> from transformers import pipeline >>> prompt = "Hugging Face is a community-based open-source platform for machine learning." >>> generator = pipeline(task="text-generation") >>> generator(prompt) masked: the model’s objective is to predict a masked token in a sequence with full access to the tokens in the sequence >>> text = "Hugging Face is a community-based open-source <mask> for machine learning." >>> fill_mask = pipeline(task="fill-mask") >>> preds = fill_mask(text, top_k=1) >>> preds = [ ... { ... "score": round(pred["score"], 4), ... "token": pred["token"], ... "token_str": pred["token_str"], ... "sequence": pred["sequence"], ... } ... for pred in preds ... ] >>> preds [{'score': 0.2236, 'token': 1761, 'token_str': ' platform', 'sequence': 'Hugging Face is a community-based open-source platform for machine learning.'}] Multimodal Multimodal tasks require a model to process multiple data modalities (text, image, audio, video) to solve a particular problem. Image captioning is an example of a multimodal task where the model takes an image as input and outputs a sequence of text describing the image or some properties of the image. Although multimodal models work with different data types or modalities, internally, the preprocessing steps help the model convert all the data types into embeddings (vectors or list of numbers that holds meaningful information about the data). For a task like image captioning, the model learns relationships between image embeddings and text embeddings. Document question answering Document question answering is a task that answers natural language questions from a document. Unlike a token-level question answering task which takes text as input, document question answering takes an image of a document as input along with a question about the document and returns an answer. Document question answering can be used to parse structured documents and extract key information from it. In the example below, the total amount and change due can be extracted from a receipt. >>> from transformers import pipeline >>> from PIL import Image >>> import requests >>> url = "https://datasets-server.huggingface.co/assets/hf-internal-testing/example-documents/--/hf-internal-testing--example-documents/test/2/image/image.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> doc_question_answerer = pipeline("document-question-answering", model="magorshunov/layoutlm-invoices") >>> preds = doc_question_answerer( ... question="What is the total amount?", ... image=image, ... ) >>> preds [{'score': 0.8531, 'answer': '17,000', 'start': 4, 'end': 4}] Hopefully, this page has given you some more background information about all the types of tasks in each modality and the practical importance of each one. In the next section, you’ll learn how 🤗 Transformers work to solve these tasks.
https://huggingface.co/docs/transformers/glossary
Glossary This glossary defines general machine learning and 🤗 Transformers terms to help you better understand the documentation. A attention mask The attention mask is an optional argument used when batching sequences together. This argument indicates to the model which tokens should be attended to, and which should not. For example, consider these two sequences: >>> from transformers import BertTokenizer >>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased") >>> sequence_a = "This is a short sequence." >>> sequence_b = "This is a rather long sequence. It is at least longer than the sequence A." >>> encoded_sequence_a = tokenizer(sequence_a)["input_ids"] >>> encoded_sequence_b = tokenizer(sequence_b)["input_ids"] The encoded versions have different lengths: >>> len(encoded_sequence_a), len(encoded_sequence_b) (8, 19) Therefore, we can’t put them together in the same tensor as-is. The first sequence needs to be padded up to the length of the second one, or the second one needs to be truncated down to the length of the first one. In the first case, the list of IDs will be extended by the padding indices. We can pass a list to the tokenizer and ask it to pad like this: >>> padded_sequences = tokenizer([sequence_a, sequence_b], padding=True) We can see that 0s have been added on the right of the first sentence to make it the same length as the second one: >>> padded_sequences["input_ids"] [[101, 1188, 1110, 170, 1603, 4954, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [101, 1188, 1110, 170, 1897, 1263, 4954, 119, 1135, 1110, 1120, 1655, 2039, 1190, 1103, 4954, 138, 119, 102]] This can then be converted into a tensor in PyTorch or TensorFlow. The attention mask is a binary tensor indicating the position of the padded indices so that the model does not attend to them. For the BertTokenizer, 1 indicates a value that should be attended to, while 0 indicates a padded value. This attention mask is in the dictionary returned by the tokenizer under the key “attention_mask”: >>> padded_sequences["attention_mask"] [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]] autoencoding models See encoder models and masked language modeling autoregressive models See causal language modeling and decoder models B backbone The backbone is the network (embeddings and layers) that outputs the raw hidden states or features. It is usually connected to a head which accepts the features as its input to make a prediction. For example, ViTModel is a backbone without a specific head on top. Other models can also use VitModel as a backbone such as DPT. C causal language modeling A pretraining task where the model reads the texts in order and has to predict the next word. It’s usually done by reading the whole sentence but using a mask inside the model to hide the future tokens at a certain timestep. channel Color images are made up of some combination of values in three channels - red, green, and blue (RGB) - and grayscale images only have one channel. In 🤗 Transformers, the channel can be the first or last dimension of an image’s tensor: [n_channels, height, width] or [height, width, n_channels]. connectionist temporal classification (CTC) An algorithm which allows a model to learn without knowing exactly how the input and output are aligned; CTC calculates the distribution of all possible outputs for a given input and chooses the most likely output from it. 
CTC is commonly used in speech recognition tasks because speech doesn’t always cleanly align with the transcript for a variety of reasons such as a speaker’s different speech rates. convolution A type of layer in a neural network where the input matrix is multiplied element-wise by a smaller matrix (kernel or filter) and the values are summed up in a new matrix. This is known as a convolutional operation which is repeated over the entire input matrix. Each operation is applied to a different segment of the input matrix. Convolutional neural networks (CNNs) are commonly used in computer vision. D decoder input IDs This input is specific to encoder-decoder models, and contains the input IDs that will be fed to the decoder. These inputs should be used for sequence to sequence tasks, such as translation or summarization, and are usually built in a way specific to each model. Most encoder-decoder models (BART, T5) create their decoder_input_ids on their own from the labels. In such models, passing the labels is the preferred way to handle training. Please check each model’s docs to see how they handle these input IDs for sequence to sequence training. decoder models Also referred to as autoregressive models, decoder models involve a pretraining task (called causal language modeling) where the model reads the texts in order and has to predict the next word. It’s usually done by reading the whole sentence with a mask to hide future tokens at a certain timestep. deep learning (DL) Machine learning algorithms which uses neural networks with several layers. E encoder models Also known as autoencoding models, encoder models take an input (such as text or images) and transform them into a condensed numerical representation called an embedding. Oftentimes, encoder models are pretrained using techniques like masked language modeling, which masks parts of the input sequence and forces the model to create more meaningful representations. F feature extraction The process of selecting and transforming raw data into a set of features that are more informative and useful for machine learning algorithms. Some examples of feature extraction include transforming raw text into word embeddings and extracting important features such as edges or shapes from image/video data. feed forward chunking In each residual attention block in transformers the self-attention layer is usually followed by 2 feed forward layers. The intermediate embedding size of the feed forward layers is often bigger than the hidden size of the model (e.g., for bert-base-uncased). For an input of size [batch_size, sequence_length], the memory required to store the intermediate feed forward embeddings [batch_size, sequence_length, config.intermediate_size] can account for a large fraction of the memory use. The authors of Reformer: The Efficient Transformer noticed that since the computation is independent of the sequence_length dimension, it is mathematically equivalent to compute the output embeddings of both feed forward layers [batch_size, config.hidden_size]_0, ..., [batch_size, config.hidden_size]_n individually and concat them afterward to [batch_size, sequence_length, config.hidden_size] with n = sequence_length, which trades increased computation time against reduced memory use, but yields a mathematically equivalent result. For models employing the function apply_chunking_to_forward(), the chunk_size defines the number of output embeddings that are computed in parallel and thus defines the trade-off between memory and time complexity. 
If chunk_size is set to 0, no feed forward chunking is done. finetuned models Finetuning is a form of transfer learning which involves taking a pretrained model, freezing its weights, and replacing the output layer with a newly added model head. The model head is trained on your target dataset. See the Fine-tune a pretrained model tutorial for more details, and learn how to fine-tune models with 🤗 Transformers. H head The model head refers to the last layer of a neural network that accepts the raw hidden states and projects them onto a different dimension. There is a different model head for each task. For example: GPT2ForSequenceClassification is a sequence classification head - a linear layer - on top of the base GPT2Model. ViTForImageClassification is an image classification head - a linear layer on top of the final hidden state of the CLS token - on top of the base ViTModel. Wav2Vec2ForCTC ia a language modeling head with CTC on top of the base Wav2Vec2Model. I image patch Vision-based Transformers models split an image into smaller patches which are linearly embedded, and then passed as a sequence to the model. You can find the patch_size - or resolution - of the model in its configuration. inference Inference is the process of evaluating a model on new data after training is complete. See the Pipeline for inference tutorial to learn how to perform inference with 🤗 Transformers. input IDs The input ids are often the only required parameters to be passed to the model as input. They are token indices, numerical representations of tokens building the sequences that will be used as input by the model. Each tokenizer works differently but the underlying mechanism remains the same. Here’s an example using the BERT tokenizer, which is a WordPiece tokenizer: >>> from transformers import BertTokenizer >>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased") >>> sequence = "A Titan RTX has 24GB of VRAM" The tokenizer takes care of splitting the sequence into tokens available in the tokenizer vocabulary. >>> tokenized_sequence = tokenizer.tokenize(sequence) The tokens are either words or subwords. Here for instance, “VRAM” wasn’t in the model vocabulary, so it’s been split in “V”, “RA” and “M”. To indicate those tokens are not separate words but parts of the same word, a double-hash prefix is added for “RA” and “M”: >>> print(tokenized_sequence) ['A', 'Titan', 'R', '##T', '##X', 'has', '24', '##GB', 'of', 'V', '##RA', '##M'] These tokens can then be converted into IDs which are understandable by the model. This can be done by directly feeding the sentence to the tokenizer, which leverages the Rust implementation of 🤗 Tokenizers for peak performance. >>> inputs = tokenizer(sequence) The tokenizer returns a dictionary with all the arguments necessary for its corresponding model to work properly. The token indices are under the key input_ids: >>> encoded_sequence = inputs["input_ids"] >>> print(encoded_sequence) [101, 138, 18696, 155, 1942, 3190, 1144, 1572, 13745, 1104, 159, 9664, 2107, 102] Note that the tokenizer automatically adds “special tokens” (if the associated model relies on them) which are special IDs the model sometimes uses. If we decode the previous sequence of ids, >>> decoded_sequence = tokenizer.decode(encoded_sequence) we will see >>> print(decoded_sequence) [CLS] A Titan RTX has 24GB of VRAM [SEP] because this is the way a BertModel is going to expect its inputs. 
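Once encoded, those inputs can be passed straight to the model. Below is a minimal sketch, assuming the same bert-base-cased tokenizer and sequence as above (the shape reflects one batch of 14 tokens and BERT's hidden size of 768):
>>> import torch
>>> from transformers import BertModel
>>> model = BertModel.from_pretrained("bert-base-cased")
>>> # Asking the tokenizer for PyTorch tensors returns a ready-to-use batch of size 1
>>> pt_inputs = tokenizer(sequence, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**pt_inputs)
>>> outputs.last_hidden_state.shape  # (batch_size, sequence_length, hidden_size)
torch.Size([1, 14, 768])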
L labels The labels are an optional argument which can be passed in order for the model to compute the loss itself. These labels should be the expected prediction of the model: it will use the standard loss in order to compute the loss between its predictions and the expected value (the label). These labels are different according to the model head, for example: For sequence classification models, (BertForSequenceClassification), the model expects a tensor of dimension (batch_size) with each value of the batch corresponding to the expected label of the entire sequence. For token classification models, (BertForTokenClassification), the model expects a tensor of dimension (batch_size, seq_length) with each value corresponding to the expected label of each individual token. For masked language modeling, (BertForMaskedLM), the model expects a tensor of dimension (batch_size, seq_length) with each value corresponding to the expected label of each individual token: the labels being the token ID for the masked token, and values to be ignored for the rest (usually -100). For sequence to sequence tasks, (BartForConditionalGeneration, MBartForConditionalGeneration), the model expects a tensor of dimension (batch_size, tgt_seq_length) with each value corresponding to the target sequences associated with each input sequence. During training, both BART and T5 will make the appropriate decoder_input_ids and decoder attention masks internally. They usually do not need to be supplied. This does not apply to models leveraging the Encoder-Decoder framework. For image classification models, (ViTForImageClassification), the model expects a tensor of dimension (batch_size) with each value of the batch corresponding to the expected label of each individual image. For semantic segmentation models, (SegformerForSemanticSegmentation), the model expects a tensor of dimension (batch_size, height, width) with each value of the batch corresponding to the expected label of each individual pixel. For object detection models, (DetrForObjectDetection), the model expects a list of dictionaries with a class_labels and boxes key where each value of the batch corresponds to the expected label and number of bounding boxes of each individual image. For automatic speech recognition models, (Wav2Vec2ForCTC), the model expects a tensor of dimension (batch_size, target_length) with each value corresponding to the expected label of each individual token. Each model’s labels may be different, so be sure to always check the documentation of each model for more information about their specific labels! The base models (BertModel) do not accept labels, as these are the base transformer models, simply outputting features. large language models (LLM) A generic term that refers to transformer language models (GPT-3, BLOOM, OPT) that were trained on a large quantity of data. These models also tend to have a large number of learnable parameters (e.g. 175 billion for GPT-3). M masked language modeling (MLM) A pretraining task where the model sees a corrupted version of the texts, usually done by masking some tokens randomly, and has to predict the original text. multimodal A task that combines texts with another kind of inputs (for instance images). N Natural language generation (NLG) All tasks related to generating text (for instance, Write With Transformers, translation). Natural language processing (NLP) A generic way to say “deal with texts”. 
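As a concrete illustration of the labels entry above, here is a minimal sketch of letting a sequence classification model compute its own loss; the bert-base-cased checkpoint, num_labels=2, and the label value are illustrative assumptions only:
>>> import torch
>>> from transformers import BertTokenizer, BertForSequenceClassification
>>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased")
>>> model = BertForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)
>>> inputs = tokenizer("This is a short sequence.", return_tensors="pt")
>>> labels = torch.tensor([1])  # one expected class index per sequence in the batch, shape (batch_size,)
>>> outputs = model(**inputs, labels=labels)
>>> loss = outputs.loss  # loss computed between the model's logits and the provided labels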
Natural language understanding (NLU) All tasks related to understanding what is in a text (for instance classifying the whole text, individual words). P pipeline A pipeline in 🤗 Transformers is an abstraction referring to a series of steps that are executed in a specific order to preprocess and transform data and return a prediction from a model. Some example stages found in a pipeline might be data preprocessing, feature extraction, and normalization. For more details, see Pipelines for inference. pixel values A tensor of the numerical representations of an image that is passed to a model. The pixel values have a shape of [batch_size, num_channels, height, width], and are generated from an image processor. pooling An operation that reduces a matrix into a smaller matrix, either by taking the maximum or average of the pooled dimension(s). Pooling layers are commonly found between convolutional layers to downsample the feature representation. position IDs Contrary to RNNs that have the position of each token embedded within them, transformers are unaware of the position of each token. Therefore, the position IDs (position_ids) are used by the model to identify each token’s position in the list of tokens. They are an optional parameter. If no position_ids are passed to the model, the IDs are automatically created as absolute positional embeddings. Absolute positional embeddings are selected in the range [0, config.max_position_embeddings - 1]. Some models use other types of positional embeddings, such as sinusoidal position embeddings or relative position embeddings. preprocessing The task of preparing raw data into a format that can be easily consumed by machine learning models. For example, text is typically preprocessed by tokenization. To gain a better idea of what preprocessing looks like for other input types, check out the Preprocess tutorial. pretrained model A model that has been pretrained on some data (for instance all of Wikipedia). Pretraining methods involve a self-supervised objective, which can be reading the text and trying to predict the next word (see causal language modeling) or masking some words and trying to predict them (see masked language modeling). Speech and vision models have their own pretraining objectives. For example, Wav2Vec2 is a speech model pretrained on a contrastive task which requires the model to identify the “true” speech representation from a set of “false” speech representations. On the other hand, BEiT is a vision model pretrained on a masked image modeling task which masks some of the image patches and requires the model to predict the masked patches (similar to the masked language modeling objective). R recurrent neural network (RNN) A type of model that uses a loop over a layer to process texts. representation learning A subfield of machine learning which focuses on learning meaningful representations of raw data. Some examples of representation learning techniques include word embeddings, autoencoders, and Generative Adversarial Networks (GANs). S sampling rate A measurement in hertz of the number of samples (the audio signal) taken per second. The sampling rate is a result of discretizing a continuous signal such as speech. self-attention Each element of the input finds out which other elements of the input they should attend to. self-supervised learning A category of machine learning techniques in which a model creates its own learning objective from unlabeled data. 
It differs from unsupervised learning and supervised learning in that the learning process is supervised, but not explicitly from the user. One example of self-supervised learning is masked language modeling, where a model is passed sentences with a proportion of its tokens removed and learns to predict the missing tokens. semi-supervised learning A broad category of machine learning training techniques that leverages a small amount of labeled data with a larger quantity of unlabeled data to improve the accuracy of a model, unlike supervised learning and unsupervised learning. An example of a semi-supervised learning approach is “self-training”, in which a model is trained on labeled data, and then used to make predictions on the unlabeled data. The portion of the unlabeled data that the model predicts with the most confidence gets added to the labeled dataset and used to retrain the model. sequence-to-sequence (seq2seq) Models that generate a new sequence from an input, like translation models, or summarization models (such as Bart or T5). stride In convolution or pooling, the stride refers to the distance the kernel is moved over a matrix. A stride of 1 means the kernel is moved one pixel over at a time, and a stride of 2 means the kernel is moved two pixels over at a time. supervised learning A form of model training that directly uses labeled data to correct and instruct model performance. Data is fed into the model being trained, and its predictions are compared to the known labels. The model updates its weights based on how incorrect its predictions were, and the process is repeated to optimize model performance. T token A part of a sentence, usually a word, but can also be a subword (non-common words are often split in subwords) or a punctuation symbol. token Type IDs Some models’ purpose is to do classification on pairs of sentences or question answering. These require two different sequences to be joined in a single “input_ids” entry, which usually is performed with the help of special tokens, such as the classifier ([CLS]) and separator ([SEP]) tokens. For example, the BERT model builds its two sequence input as such: We can use our tokenizer to automatically generate such a sentence by passing the two sequences to tokenizer as two arguments (and not a list, like before) like this: >>> from transformers import BertTokenizer >>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased") >>> sequence_a = "HuggingFace is based in NYC" >>> sequence_b = "Where is HuggingFace based?" >>> encoded_dict = tokenizer(sequence_a, sequence_b) >>> decoded = tokenizer.decode(encoded_dict["input_ids"]) which will return: >>> print(decoded) [CLS] HuggingFace is based in NYC [SEP] Where is HuggingFace based? [SEP] This is enough for some models to understand where one sequence ends and where another begins. However, other models, such as BERT, also deploy token type IDs (also called segment IDs). They are represented as a binary mask identifying the two types of sequence in the model. The tokenizer returns this mask as the “token_type_ids” entry: >>> encoded_dict["token_type_ids"] [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1] The first sequence, the “context” used for the question, has all its tokens represented by a 0, whereas the second sequence, corresponding to the “question”, has all its tokens represented by a 1. Some models, like XLNetModel use an additional token represented by a 2. 
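Entries such as sequence-to-sequence (seq2seq) above are easiest to see in code. A minimal sketch, reusing the t5-small checkpoint shown earlier in this document (the prompt is illustrative):
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> tokenizer = AutoTokenizer.from_pretrained("t5-small")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
>>> input_ids = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt").input_ids
>>> # generate() produces a new target sequence token by token from the input sequence
>>> output_ids = model.generate(input_ids)
>>> tokenizer.decode(output_ids[0], skip_special_tokens=True)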
transfer learning A technique that involves taking a pretrained model and adapting it to a dataset specific to your task. Instead of training a model from scratch, you can leverage knowledge obtained from an existing model as a starting point. This speeds up the learning process and reduces the amount of training data needed. transformer Self-attention based deep learning model architecture. U unsupervised learning A form of model training in which data provided to the model is not labeled. Unsupervised learning techniques leverage statistical information of the data distribution to find patterns useful for the task at hand.
https://huggingface.co/docs/transformers/model_doc/chinese_clip
Chinese-CLIP Overview The Chinese-CLIP model was proposed in Chinese CLIP: Contrastive Vision-Language Pretraining in Chinese by An Yang, Junshu Pan, Junyang Lin, Rui Men, Yichang Zhang, Jingren Zhou, Chang Zhou. Chinese-CLIP is an implementation of CLIP (Radford et al., 2021) on a large-scale dataset of Chinese image-text pairs. It is capable of performing cross-modal retrieval and also playing as a vision backbone for vision tasks like zero-shot image classification, open-domain object detection, etc. The original Chinese-CLIP code is released at this link. The abstract from the paper is the following: The tremendous success of CLIP (Radford et al., 2021) has promoted the research and application of contrastive learning for vision-language pretraining. In this work, we construct a large-scale dataset of image-text pairs in Chinese, where most data are retrieved from publicly available datasets, and we pretrain Chinese CLIP models on the new dataset. We develop 5 Chinese CLIP models of multiple sizes, spanning from 77 to 958 million parameters. Furthermore, we propose a two-stage pretraining method, where the model is first trained with the image encoder frozen and then trained with all parameters being optimized, to achieve enhanced model performance. Our comprehensive experiments demonstrate that Chinese CLIP can achieve the state-of-the-art performance on MUGE, Flickr30K-CN, and COCO-CN in the setups of zero-shot learning and finetuning, and it is able to achieve competitive performance in zero-shot image classification based on the evaluation on the ELEVATER benchmark (Li et al., 2022). Our codes, pretrained models, and demos have been released. Usage The code snippet below shows how to compute image & text features and similarities: >>> from PIL import Image >>> import requests >>> from transformers import ChineseCLIPProcessor, ChineseCLIPModel >>> model = ChineseCLIPModel.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16") >>> processor = ChineseCLIPProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16") >>> url = "https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/pokemon.jpeg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> >>> texts = ["杰尼龟", "妙蛙种子", "小火龙", "皮卡丘"] >>> >>> inputs = processor(images=image, return_tensors="pt") >>> image_features = model.get_image_features(**inputs) >>> image_features = image_features / image_features.norm(p=2, dim=-1, keepdim=True) >>> >>> inputs = processor(text=texts, padding=True, return_tensors="pt") >>> text_features = model.get_text_features(**inputs) >>> text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True) >>> >>> inputs = processor(text=texts, images=image, return_tensors="pt", padding=True) >>> outputs = model(**inputs) >>> logits_per_image = outputs.logits_per_image >>> probs = logits_per_image.softmax(dim=1) Currently, we release the following scales of pretrained Chinese-CLIP models at HF Model Hub: OFA-Sys/chinese-clip-vit-base-patch16 OFA-Sys/chinese-clip-vit-large-patch14 OFA-Sys/chinese-clip-vit-large-patch14-336px OFA-Sys/chinese-clip-vit-huge-patch14 The Chinese-CLIP model was contributed by OFA-Sys. ChineseCLIPConfig class transformers.ChineseCLIPConfig < source > ( text_config = None vision_config = None projection_dim = 512 logit_scale_init_value = 2.6592 **kwargs ) Parameters text_config (dict, optional) — Dictionary of configuration options used to initialize ChineseCLIPTextConfig. 
vision_config (dict, optional) — Dictionary of configuration options used to initialize ChineseCLIPVisionConfig. projection_dim (int, optional, defaults to 512) — Dimensionality of text and vision projection layers. logit_scale_init_value (float, optional, defaults to 2.6592) — The initial value of the logit_scale parameter. Default is used as per the original ChineseCLIP implementation. kwargs (optional) — Dictionary of keyword arguments. ChineseCLIPConfig is the configuration class to store the configuration of a ChineseCLIPModel. It is used to instantiate a Chinese-CLIP model according to the specified arguments, defining the text model and vision model configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the Chinese-CLIP OFA-Sys/chinese-clip-vit-base-patch16 architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import ChineseCLIPConfig, ChineseCLIPModel >>> >>> configuration = ChineseCLIPConfig() >>> >>> model = ChineseCLIPModel(configuration) >>> >>> configuration = model.config >>> >>> >>> config_text = ChineseCLIPTextConfig() >>> config_vision = ChineseCLIPVisionConfig() >>> config = ChineseCLIPConfig.from_text_vision_configs(config_text, config_vision) from_text_vision_configs < source > ( text_config: ChineseCLIPTextConfig vision_config: ChineseCLIPVisionConfig **kwargs ) Instantiate a ChineseCLIPConfig (or a derived class) from Chinese-CLIP text model configuration and Chinese-CLIP vision model configuration. Returns: ChineseCLIPConfig: An instance of a configuration object ChineseCLIPTextConfig class transformers.ChineseCLIPTextConfig < source > ( vocab_size = 30522 hidden_size = 768 num_hidden_layers = 12 num_attention_heads = 12 intermediate_size = 3072 hidden_act = 'gelu' hidden_dropout_prob = 0.1 attention_probs_dropout_prob = 0.1 max_position_embeddings = 512 type_vocab_size = 2 initializer_range = 0.02 initializer_factor = 1.0 layer_norm_eps = 1e-12 pad_token_id = 0 position_embedding_type = 'absolute' use_cache = True **kwargs ) Parameters vocab_size (int, optional, defaults to 30522) — Vocabulary size of the CHINESE_CLIP model. Defines the number of different tokens that can be represented by the input_ids passed when calling ChineseCLIPModel. hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder. hidden_act (str or Callable, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported. hidden_dropout_prob (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (float, optional, defaults to 0.1) — The dropout ratio for the attention probabilities. max_position_embeddings (int, optional, defaults to 512) — The maximum sequence length that this model might ever be used with. 
Typically set this to something large just in case (e.g., 512 or 1024 or 2048). type_vocab_size (int, optional, defaults to 2) — The vocabulary size of the token_type_ids passed when calling ChineseCLIPModel. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers. position_embedding_type (str, optional, defaults to "absolute") — Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For positional embeddings use "absolute". For more information on "relative_key", please refer to Self-Attention with Relative Position Representations (Shaw et al.). For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models with Better Relative Position Embeddings (Huang et al.). use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if config.is_decoder=True. This is the configuration class to store the configuration of a ChineseCLIPModel. It is used to instantiate a Chinese CLIP model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Chinese CLIP [OFA-Sys/chinese-clip-vit-base-patch16](https://huggingface.co/OFA-Sys/chinese-clip-vit-base-patch16) architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import ChineseCLIPTextConfig, ChineseCLIPTextModel >>> >>> configuration = ChineseCLIPTextConfig() >>> >>> model = ChineseCLIPTextModel(configuration) >>> >>> configuration = model.config ChineseCLIPVisionConfig class transformers.ChineseCLIPVisionConfig < source > ( hidden_size = 768 intermediate_size = 3072 projection_dim = 512 num_hidden_layers = 12 num_attention_heads = 12 num_channels = 3 image_size = 224 patch_size = 32 hidden_act = 'quick_gelu' layer_norm_eps = 1e-05 attention_dropout = 0.0 initializer_range = 0.02 initializer_factor = 1.0 **kwargs ) Parameters hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder. num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. image_size (int, optional, defaults to 224) — The size (resolution) of each image. patch_size (int, optional, defaults to 32) — The size (resolution) of each patch. hidden_act (str or function, optional, defaults to "quick_gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu", "gelu_new" and "quick_gelu" are supported. layer_norm_eps (float, optional, defaults to 1e-5) — The epsilon used by the layer normalization layers. attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities. 
initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. initializer_factor (float, optional, defaults to 1) — A factor for initializing all weight matrices (should be kept to 1, used internally for initialization testing). This is the configuration class to store the configuration of a ChineseCLIPModel. It is used to instantiate a ChineseCLIP model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the ChineseCLIP [OFA-Sys/chinese-clip-vit-base-patch16](https://huggingface.co/OFA-Sys/chinese-clip-vit-base-patch16) architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import ChineseCLIPVisionConfig, ChineseCLIPVisionModel >>> >>> configuration = ChineseCLIPVisionConfig() >>> >>> model = ChineseCLIPVisionModel(configuration) >>> >>> configuration = model.config ChineseCLIPImageProcessor class transformers.ChineseCLIPImageProcessor < source > ( do_resize: bool = True size: typing.Dict[str, int] = None resample: Resampling = <Resampling.BICUBIC: 3> do_center_crop: bool = True crop_size: typing.Dict[str, int] = None do_rescale: bool = True rescale_factor: typing.Union[int, float] = 0.00392156862745098 do_normalize: bool = True image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None do_convert_rgb: bool = True **kwargs ) Parameters do_resize (bool, optional, defaults to True) — Whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by do_resize in the preprocess method. size (Dict[str, int], optional, defaults to {"shortest_edge": 224}) — Size of the image after resizing. The shortest edge of the image is resized to size[“shortest_edge”], with the longest edge resized to keep the input aspect ratio. Can be overridden by size in the preprocess method. resample (PILImageResampling, optional, defaults to PILImageResampling.BICUBIC) — Resampling filter to use if resizing the image. Can be overridden by resample in the preprocess method. do_center_crop (bool, optional, defaults to True) — Whether to center crop the image to the specified crop_size. Can be overridden by do_center_crop in the preprocess method. crop_size (Dict[str, int], optional, defaults to 224) — Size of the output image after applying center_crop. Can be overridden by crop_size in the preprocess method. do_rescale (bool, optional, defaults to True) — Whether to rescale the image by the specified scale rescale_factor. Can be overridden by do_rescale in the preprocess method. rescale_factor (int or float, optional, defaults to 1/255) — Scale factor to use if rescaling the image. Can be overridden by rescale_factor in the preprocess method. do_normalize (bool, optional, defaults to True) — Whether to normalize the image. Can be overridden by do_normalize in the preprocess method. image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) — Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the image_mean parameter in the preprocess method. image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) — Standard deviation to use if normalizing the image. 
This is a float or list of floats the length of the number of channels in the image. Can be overridden by the image_std parameter in the preprocess method. Can be overridden by the image_std parameter in the preprocess method. do_convert_rgb (bool, optional, defaults to True) — Whether to convert the image to RGB. Constructs a Chinese-CLIP image processor. preprocess < source > ( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]] do_resize: bool = None size: typing.Dict[str, int] = None resample: Resampling = None do_center_crop: bool = None crop_size: int = None do_rescale: bool = None rescale_factor: float = None do_normalize: bool = None image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None do_convert_rgb: bool = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None data_format: typing.Optional[transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'> input_data_format: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None **kwargs ) Parameters images (ImageInput) — Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set do_rescale=False. do_resize (bool, optional, defaults to self.do_resize) — Whether to resize the image. size (Dict[str, int], optional, defaults to self.size) — Size of the image after resizing. Shortest edge of the image is resized to size[“shortest_edge”], with the longest edge resized to keep the input aspect ratio. resample (int, optional, defaults to self.resample) — Resampling filter to use if resizing the image. This can be one of the enum PILImageResampling. Only has an effect if do_resize is set to True. do_center_crop (bool, optional, defaults to self.do_center_crop) — Whether to center crop the image. crop_size (Dict[str, int], optional, defaults to self.crop_size) — Size of the center crop. Only has an effect if do_center_crop is set to True. do_rescale (bool, optional, defaults to self.do_rescale) — Whether to rescale the image. rescale_factor (float, optional, defaults to self.rescale_factor) — Rescale factor to rescale the image by if do_rescale is set to True. do_normalize (bool, optional, defaults to self.do_normalize) — Whether to normalize the image. image_mean (float or List[float], optional, defaults to self.image_mean) — Image mean to use for normalization. Only has an effect if do_normalize is set to True. image_std (float or List[float], optional, defaults to self.image_std) — Image standard deviation to use for normalization. Only has an effect if do_normalize is set to True. do_convert_rgb (bool, optional, defaults to self.do_convert_rgb) — Whether to convert the image to RGB. return_tensors (str or TensorType, optional) — The type of tensors to return. Can be one of: Unset: Return a list of np.ndarray. TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor. TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor. TensorType.NUMPY or 'np': Return a batch of type np.ndarray. TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray. data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) — The channel dimension format for the output image. 
Can be one of: "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format. "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format. Unset: Use the channel dimension format of the input image. input_data_format (ChannelDimension or str, optional) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of: "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format. "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format. "none" or ChannelDimension.NONE: image in (height, width) format. Preprocess an image or batch of images. ChineseCLIPFeatureExtractor ChineseCLIPProcessor class transformers.ChineseCLIPProcessor < source > ( image_processor = None tokenizer = None **kwargs ) Parameters image_processor (ChineseCLIPImageProcessor) — The image processor is a required input. tokenizer (BertTokenizerFast) — The tokenizer is a required input. Constructs a Chinese-CLIP processor which wraps a Chinese-CLIP image processor and a Chinese-CLIP tokenizer into a single processor. ChineseCLIPProcessor offers all the functionalities of ChineseCLIPImageProcessor and BertTokenizerFast. See the __call__() and decode() for more information. This method forwards all its arguments to BertTokenizerFast’s batch_decode(). Please refer to the docstring of this method for more information. This method forwards all its arguments to BertTokenizerFast’s decode(). Please refer to the docstring of this method for more information. ChineseCLIPModel class transformers.ChineseCLIPModel < source > ( config: ChineseCLIPConfig ) Parameters config (ChineseCLIPConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None pixel_values: typing.Optional[torch.FloatTensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.LongTensor] = None return_loss: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.chinese_clip.modeling_chinese_clip.ChineseCLIPOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. 
Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See ChineseCLIPImageProcessor.call() for details. return_loss (bool, optional) — Whether or not to return the contrastive loss. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.models.chinese_clip.modeling_chinese_clip.ChineseCLIPOutput or tuple(torch.FloatTensor) A transformers.models.chinese_clip.modeling_chinese_clip.ChineseCLIPOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.chinese_clip.configuration_chinese_clip.ChineseCLIPConfig'>) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when return_loss is True) — Contrastive loss for image-text similarity. logits_per_image:(torch.FloatTensor of shape (image_batch_size, text_batch_size)) — The scaled dot product scores between image_embeds and text_embeds. This represents the image-text similarity scores. logits_per_text:(torch.FloatTensor of shape (text_batch_size, image_batch_size)) — The scaled dot product scores between text_embeds and image_embeds. This represents the text-image similarity scores. text_embeds(torch.FloatTensor of shape (batch_size, output_dim) — The text embeddings obtained by applying the projection layer to the pooled output of ChineseCLIPTextModel. image_embeds(torch.FloatTensor of shape (batch_size, output_dim) — The image embeddings obtained by applying the projection layer to the pooled output of ChineseCLIPVisionModel. text_model_output(BaseModelOutputWithPoolingAndCrossAttentions): The output of the ChineseCLIPTextModel. vision_model_output(BaseModelOutputWithPoolingAndCrossAttentions): The output of the ChineseCLIPVisionModel. The ChineseCLIPModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Examples: >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, ChineseCLIPModel >>> model = ChineseCLIPModel.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16") >>> processor = AutoProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16") >>> url = "https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/pokemon.jpeg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor(text=["杰尼龟", "妙蛙种子", "小火龙", "皮卡丘"], images=image, return_tensors="pt", padding=True) >>> outputs = model(**inputs) >>> logits_per_image = outputs.logits_per_image >>> probs = logits_per_image.softmax(dim=1) get_text_features < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → text_features (torch.FloatTensor of shape (batch_size, output_dim) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns text_features (torch.FloatTensor of shape (batch_size, output_dim) The text embeddings obtained by applying the projection layer to the final [CLS] hidden state of Text-Transformer. The ChineseCLIPModel forward method, overrides the __call__ special method. 
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import AutoTokenizer, ChineseCLIPModel >>> model = ChineseCLIPModel.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16") >>> tokenizer = AutoTokenizer.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16") >>> inputs = tokenizer(["杰尼龟", "妙蛙种子", "小火龙", "皮卡丘"], padding=True, return_tensors="pt") >>> text_features = model.get_text_features(**inputs) >>> text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True) get_image_features < source > ( pixel_values: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → image_features (torch.FloatTensor of shape (batch_size, output_dim) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See ChineseCLIPImageProcessor.call() for details. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns image_features (torch.FloatTensor of shape (batch_size, output_dim) The image embeddings obtained by applying the projection layer to the final [CLS] hidden state of Vision-Transformer. The ChineseCLIPModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, ChineseCLIPModel >>> model = ChineseCLIPModel.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16") >>> processor = AutoProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16") >>> url = "https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/pokemon.jpeg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor(images=image, return_tensors="pt") >>> image_features = model.get_image_features(**inputs) >>> image_features = image_features / image_features.norm(p=2, dim=-1, keepdim=True) ChineseCLIPTextModel class transformers.ChineseCLIPTextModel < source > ( config add_pooling_layer = True ) Parameters config (ChineseCLIPConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The text model from CHINESE_CLIP without any head or projection on top. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in Attention is all you need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. To behave as an decoder the model needs to be initialized with the is_decoder argument of the configuration set to True. To be used in a Seq2Seq model, the model needs to initialized with both is_decoder argument and add_cross_attention set to True; an encoder_hidden_states is then expected as an input to the forward pass. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None encoder_hidden_states: typing.Optional[torch.Tensor] = None encoder_attention_mask: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See ChineseCLIPImageProcessor.call() for details. return_loss (bool, optional) — Whether or not to return the contrastive loss. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. 
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). A transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ChineseCLIPConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. 
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. The ChineseCLIPTextModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, ChineseCLIPTextModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16") >>> model = ChineseCLIPTextModel.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state ChineseCLIPVisionModel class transformers.ChineseCLIPVisionModel < source > ( config: ChineseCLIPVisionConfig ) Parameters config (ChineseCLIPConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The vision model from CHINESE_CLIP without any head or projection on top. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See ChineseCLIPImageProcessor.call() for details. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.chinese_clip.configuration_chinese_clip.ChineseCLIPVisionConfig'>) and inputs. 
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The ChineseCLIPVisionModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from PIL import Image >>> import requests >>> from transformers import CLIPProcessor, ChineseCLIPVisionModel >>> model = ChineseCLIPVisionModel.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16") >>> processor = CLIPProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16") >>> url = "https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/pokemon.jpeg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor(images=image, return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_state = outputs.last_hidden_state >>> pooled_output = outputs.pooler_output
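Taken together, the get_text_features and get_image_features snippets above can be used for zero-shot image–text matching. The following is a minimal sketch that is not part of the original documentation: it reuses the model and processor shown earlier, scores the L2-normalized embeddings with a plain dot product, and the candidate captions as well as the omission of the model's learned logit scale are illustrative choices only.

>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoProcessor, ChineseCLIPModel

>>> model = ChineseCLIPModel.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")
>>> processor = AutoProcessor.from_pretrained("OFA-Sys/chinese-clip-vit-base-patch16")

>>> url = "https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/pokemon.jpeg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> texts = ["杰尼龟", "妙蛙种子", "小火龙", "皮卡丘"]

>>> text_inputs = processor(text=texts, padding=True, return_tensors="pt")
>>> image_inputs = processor(images=image, return_tensors="pt")

>>> with torch.no_grad():
...     text_features = model.get_text_features(**text_inputs)
...     image_features = model.get_image_features(**image_inputs)

>>> # L2-normalize both sides so a dot product is a cosine similarity (the learned logit scale is omitted here)
>>> text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True)
>>> image_features = image_features / image_features.norm(p=2, dim=-1, keepdim=True)
>>> similarity = image_features @ text_features.T
>>> best_caption = texts[similarity.argmax(dim=-1).item()]

In practice, the full ChineseCLIPModel forward pass provides equivalent similarity scores directly, so this sketch is mainly useful when the text or image embeddings are precomputed and cached.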
https://huggingface.co/docs/transformers/model_doc/camembert
CamemBERT Overview The CamemBERT model was proposed in CamemBERT: a Tasty French Language Model by Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, Éric Villemonte de la Clergerie, Djamé Seddah, and Benoît Sagot. It is based on Facebook’s RoBERTa model released in 2019. It is a model trained on 138GB of French text. The abstract from the paper is the following: Pretrained language models are now ubiquitous in Natural Language Processing. Despite their success, most available models have either been trained on English data or on the concatenation of data in multiple languages. This makes practical use of such models —in all languages except English— very limited. Aiming to address this issue for French, we release CamemBERT, a French version of the Bi-directional Encoders for Transformers (BERT). We measure the performance of CamemBERT compared to multilingual models in multiple downstream tasks, namely part-of-speech tagging, dependency parsing, named-entity recognition, and natural language inference. CamemBERT improves the state of the art for most of the tasks considered. We release the pretrained model for CamemBERT hoping to foster research and downstream applications for French NLP. Tips: This implementation is the same as RoBERTa. Refer to the documentation of RoBERTa for usage examples as well as the information relative to the inputs and outputs. This model was contributed by camembert. The original code can be found here. Documentation resources Text classification task guide Token classification task guide Question answering task guide Causal language modeling task guide Masked language modeling task guide Multiple choice task guide CamembertConfig class transformers.CamembertConfig < source > ( vocab_size = 30522 hidden_size = 768 num_hidden_layers = 12 num_attention_heads = 12 intermediate_size = 3072 hidden_act = 'gelu' hidden_dropout_prob = 0.1 attention_probs_dropout_prob = 0.1 max_position_embeddings = 512 type_vocab_size = 2 initializer_range = 0.02 layer_norm_eps = 1e-12 pad_token_id = 1 bos_token_id = 0 eos_token_id = 2 position_embedding_type = 'absolute' use_cache = True classifier_dropout = None **kwargs ) Parameters vocab_size (int, optional, defaults to 30522) — Vocabulary size of the BERT model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling CamembertModel or TFCamembertModel. hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder. hidden_act (str or Callable, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported. hidden_dropout_prob (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (float, optional, defaults to 0.1) — The dropout ratio for the attention probabilities. max_position_embeddings (int, optional, defaults to 512) — The maximum sequence length that this model might ever be used with. 
Typically set this to something large just in case (e.g., 512 or 1024 or 2048). type_vocab_size (int, optional, defaults to 2) — The vocabulary size of the token_type_ids passed when calling CamembertModel or TFCamembertModel. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers. position_embedding_type (str, optional, defaults to "absolute") — Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For positional embeddings use "absolute". For more information on "relative_key", please refer to Self-Attention with Relative Position Representations (Shaw et al.). For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models with Better Relative Position Embeddings (Huang et al.). is_decoder (bool, optional, defaults to False) — Whether the model is used as a decoder or not. If False, the model is used as an encoder. use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if config.is_decoder=True. classifier_dropout (float, optional) — The dropout ratio for the classification head. This is the configuration class to store the configuration of a CamembertModel or a TFCamembertModel. It is used to instantiate a Camembert model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Camembert camembert-base architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import CamembertConfig, CamembertModel >>> >>> configuration = CamembertConfig() >>> >>> model = CamembertModel(configuration) >>> >>> configuration = model.config CamembertTokenizer class transformers.CamembertTokenizer < source > ( vocab_file bos_token = '<s>' eos_token = '</s>' sep_token = '</s>' cls_token = '<s>' unk_token = '<unk>' pad_token = '<pad>' mask_token = '<mask>' additional_special_tokens = ['<s>NOTUSED', '</s>NOTUSED'] sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None **kwargs ) Parameters vocab_file (str) — SentencePiece file (generally has a .spm extension) that contains the vocabulary necessary to instantiate a tokenizer. bos_token (str, optional, defaults to "<s>") — The beginning of sequence token that was used during pretraining. Can be used a sequence classifier token. When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the cls_token. eos_token (str, optional, defaults to "</s>") — The end of sequence token. When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the sep_token. sep_token (str, optional, defaults to "</s>") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. 
cls_token (str, optional, defaults to "<s>") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. unk_token (str, optional, defaults to "<unk>") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. pad_token (str, optional, defaults to "<pad>") — The token used for padding, for example when batching sequences of different lengths. mask_token (str, optional, defaults to "<mask>") — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. additional_special_tokens (List[str], optional, defaults to ["<s>NOTUSED", "</s>NOTUSED"]) — Additional special tokens used by the tokenizer. sp_model_kwargs (dict, optional) — Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for SentencePiece can be used, among other things, to set: enable_sampling: Enable subword regularization. nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout. nbest_size = {0,1}: No sampling is performed. nbest_size > 1: samples from the nbest_size results. nbest_size < 0: assuming that nbest_size is infinite and samples from the all hypothesis (lattice) using forward-filtering-and-backward-sampling algorithm. alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout. sp_model (SentencePieceProcessor) — The SentencePiece processor that is used for every conversion (string, tokens and IDs). Adapted from RobertaTokenizer and XLNetTokenizer. Construct a CamemBERT tokenizer. Based on SentencePiece. This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. build_inputs_with_special_tokens < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs to which the special tokens will be added. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of input IDs with the appropriate special tokens. Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. An CamemBERT sequence has the following format: single sequence: <s> X </s> pair of sequences: <s> A </s></s> B </s> get_special_tokens_mask < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None already_has_special_tokens: bool = False ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. already_has_special_tokens (bool, optional, defaults to False) — Whether or not the token list is already formatted with special tokens for the model. A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model method. create_token_type_ids_from_sequences < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. 
token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of zeros. Create a mask from the two sequences passed to be used in a sequence-pair classification task. CamemBERT, like RoBERTa, does not make use of token type ids, therefore a list of zeros is returned. save_vocabulary < source > ( save_directory: str filename_prefix: typing.Optional[str] = None ) CamembertTokenizerFast class transformers.CamembertTokenizerFast < source > ( vocab_file = None tokenizer_file = None bos_token = '<s>' eos_token = '</s>' sep_token = '</s>' cls_token = '<s>' unk_token = '<unk>' pad_token = '<pad>' mask_token = '<mask>' additional_special_tokens = ['<s>NOTUSED', '</s>NOTUSED'] **kwargs ) Parameters vocab_file (str) — SentencePiece file (generally has a .spm extension) that contains the vocabulary necessary to instantiate a tokenizer. bos_token (str, optional, defaults to "<s>") — The beginning of sequence token that was used during pretraining. Can be used a sequence classifier token. When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the cls_token. eos_token (str, optional, defaults to "</s>") — The end of sequence token. When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the sep_token. sep_token (str, optional, defaults to "</s>") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. cls_token (str, optional, defaults to "<s>") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. unk_token (str, optional, defaults to "<unk>") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. pad_token (str, optional, defaults to "<pad>") — The token used for padding, for example when batching sequences of different lengths. mask_token (str, optional, defaults to "<mask>") — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. additional_special_tokens (List[str], optional, defaults to ["<s>NOTUSED", "</s>NOTUSED"]) — Additional special tokens used by the tokenizer. Construct a “fast” CamemBERT tokenizer (backed by HuggingFace’s tokenizers library). Adapted from RobertaTokenizer and XLNetTokenizer. Based on BPE. This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. build_inputs_with_special_tokens < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs to which the special tokens will be added. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of input IDs with the appropriate special tokens. Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. 
A CamemBERT sequence has the following format: single sequence: <s> X </s> pair of sequences: <s> A </s></s> B </s> create_token_type_ids_from_sequences < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of zeros. Create a mask from the two sequences passed to be used in a sequence-pair classification task. CamemBERT, like RoBERTa, does not make use of token type ids, therefore a list of zeros is returned. CamembertModel class transformers.CamembertModel < source > ( config add_pooling_layer = True ) Parameters config (CamembertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare CamemBERT Model transformer outputting raw hidden-states without any specific head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in Attention is all you need (https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. To behave as a decoder the model needs to be initialized with the is_decoder argument of the configuration set to True. To be used in a Seq2Seq model, the model needs to be initialized with both the is_decoder argument and add_cross_attention set to True; an encoder_hidden_states is then expected as an input to the forward pass. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None encoder_hidden_states: typing.Optional[torch.Tensor] = None encoder_attention_mask: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. 
Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). A transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CamembertConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. 
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. The CamembertModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, CamembertModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("camembert-base") >>> model = CamembertModel.from_pretrained("camembert-base") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state CamembertForCausalLM class transformers.CamembertForCausalLM < source > ( config ) Parameters config (CamembertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. 
CamemBERT Model with a language modeling head on top for CLM fine-tuning. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None encoder_hidden_states: typing.Optional[torch.FloatTensor] = None encoder_attention_mask: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None past_key_values: typing.Tuple[typing.Tuple[torch.FloatTensor]] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. 
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size] past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CamembertConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction). logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Cross attentions weights after the attention softmax, used to compute the weighted average in the cross-attention heads. 
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key, value states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting. Only relevant if config.is_decoder = True. Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. The CamembertForCausalLM forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, CamembertForCausalLM, AutoConfig >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("camembert-base") >>> config = AutoConfig.from_pretrained("camembert-base") >>> config.is_decoder = True >>> model = CamembertForCausalLM.from_pretrained("camembert-base", config=config) >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> prediction_logits = outputs.logits CamembertForMaskedLM class transformers.CamembertForMaskedLM < source > ( config ) Parameters config (CamembertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. CamemBERT Model with a language modeling head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None encoder_hidden_states: typing.Optional[torch.FloatTensor] = None encoder_attention_mask: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? 
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size] kwargs (Dict[str, any], optional, defaults to {}) — Used to hide legacy arguments that have been deprecated. A transformers.modeling_outputs.MaskedLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CamembertConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The CamembertForMaskedLM forward method, overrides the __call__ special method. 
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, CamembertForMaskedLM >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("camembert-base") >>> model = CamembertForMaskedLM.from_pretrained("camembert-base") >>> inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> >>> mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0] >>> predicted_token_id = logits[0, mask_token_index].argmax(axis=-1) >>> tokenizer.decode(predicted_token_id) ' Paris' >>> labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"] >>> >>> labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100) >>> outputs = model(**inputs, labels=labels) >>> round(outputs.loss.item(), 2) 0.1 CamembertForSequenceClassification class transformers.CamembertForSequenceClassification < source > ( config ) Parameters config (CamembertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. CamemBERT Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? 
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CamembertConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The CamembertForSequenceClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example of single-label classification: >>> import torch >>> from transformers import AutoTokenizer, CamembertForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base-emotion") >>> model = CamembertForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-emotion") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_id = logits.argmax().item() >>> model.config.id2label[predicted_class_id] 'optimism' >>> >>> num_labels = len(model.config.id2label) >>> model = CamembertForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-emotion", num_labels=num_labels) >>> labels = torch.tensor([1]) >>> loss = model(**inputs, labels=labels).loss >>> round(loss.item(), 2) 0.08 Example of multi-label classification: >>> import torch >>> from transformers import AutoTokenizer, CamembertForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base-emotion") >>> model = CamembertForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-emotion", problem_type="multi_label_classification") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5] >>> >>> num_labels = len(model.config.id2label) >>> model = CamembertForSequenceClassification.from_pretrained( ... "cardiffnlp/twitter-roberta-base-emotion", num_labels=num_labels, problem_type="multi_label_classification" ... ) >>> labels = torch.sum( ... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1 ... ).to(torch.float) >>> loss = model(**inputs, labels=labels).loss CamembertForMultipleChoice class transformers.CamembertForMultipleChoice < source > ( config ) Parameters config (CamembertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. CamemBERT Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See input_ids above) A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CamembertConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. 
(see input_ids above). Classification scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The CamembertForMultipleChoice forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example:

>>> from transformers import AutoTokenizer, CamembertForMultipleChoice
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("camembert-base")
>>> model = CamembertForMultipleChoice.from_pretrained("camembert-base")

>>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
>>> choice0 = "It is eaten with a fork and a knife."
>>> choice1 = "It is eaten while held in the hand."
>>> labels = torch.tensor(0).unsqueeze(0)

>>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
>>> outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels)

>>> # the linear classifier still needs to be trained
>>> loss = outputs.loss
>>> logits = outputs.logits

CamembertForTokenClassification
class transformers.CamembertForTokenClassification < source > ( config )
Parameters config (CamembertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. CamemBERT Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1]. A transformers.modeling_outputs.TokenClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CamembertConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). 
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The CamembertForTokenClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example:

>>> from transformers import AutoTokenizer, CamembertForTokenClassification
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("Jean-Baptiste/roberta-large-ner-english")
>>> model = CamembertForTokenClassification.from_pretrained("Jean-Baptiste/roberta-large-ner-english")

>>> inputs = tokenizer(
...     "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_token_class_ids = logits.argmax(-1)

>>> # Note that tokens are classified rather than input words, which means that
>>> # there might be more predicted token classes than words.
>>> # Multiple token classes might account for the same word
>>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
>>> predicted_tokens_classes
['O', 'ORG', 'ORG', 'O', 'O', 'O', 'O', 'O', 'LOC', 'O', 'LOC', 'LOC']

>>> labels = predicted_token_class_ids
>>> loss = model(**inputs, labels=labels).loss
>>> round(loss.item(), 2)
0.01

CamembertForQuestionAnswering
class transformers.CamembertForQuestionAnswering < source > ( config )
Parameters config (CamembertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. CamemBERT Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits). This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None start_positions: typing.Optional[torch.LongTensor] = None end_positions: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. start_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss. end_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CamembertConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The CamembertForQuestionAnswering forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example:

>>> from transformers import AutoTokenizer, CamembertForQuestionAnswering
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")
>>> model = CamembertForQuestionAnswering.from_pretrained("deepset/roberta-base-squad2")

>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"

>>> inputs = tokenizer(question, text, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> answer_start_index = outputs.start_logits.argmax()
>>> answer_end_index = outputs.end_logits.argmax()

>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> tokenizer.decode(predict_answer_tokens, skip_special_tokens=True)
' puppet'

>>> # target is "nice puppet"
>>> target_start_index = torch.tensor([14])
>>> target_end_index = torch.tensor([15])

>>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
>>> loss = outputs.loss
>>> round(loss.item(), 2)
0.86

TFCamembertModel
class transformers.TFCamembertModel < source > ( *args **kwargs )
Parameters config (CamembertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare CamemBERT Model transformer outputting raw hidden-states without any specific head on top. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass.
Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None encoder_hidden_states: np.ndarray | tf.Tensor | None = None encoder_attention_mask: np.ndarray | tf.Tensor | None = None past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None use_cache: Optional[bool] = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? 
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). encoder_hidden_states (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. encoder_attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) — contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). use_cache (bool, optional, defaults to True) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). Set to False during training, True during generation A transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CamembertConfig) and inputs. last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. 
This output is usually not a good summary of the semantic content of the input; you’re often better off averaging or pooling the sequence of hidden-states for the whole input sequence. past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. The TFCamembertModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example:

>>> from transformers import AutoTokenizer, TFCamembertModel
>>> import tensorflow as tf

>>> tokenizer = AutoTokenizer.from_pretrained("camembert-base")
>>> model = TFCamembertModel.from_pretrained("camembert-base")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> outputs = model(inputs)

>>> last_hidden_states = outputs.last_hidden_state

TFCamembertForCausalLM
class transformers.TFCamembertForCausalLM < source > ( *args **kwargs )
Parameters config (CamembertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. CamemBERT Model with a language modeling head on top for CLM fine-tuning. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None encoder_hidden_states: np.ndarray | tf.Tensor | None = None encoder_attention_mask: np.ndarray | tf.Tensor | None = None past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None use_cache: Optional[bool] = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. 
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). encoder_hidden_states (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. encoder_attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) — contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). use_cache (bool, optional, defaults to True) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). Set to False during training, True during generation labels (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) — Labels for computing the cross entropy classification loss. Indices should be in [0, ..., config.vocab_size - 1]. A transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CamembertConfig) and inputs. loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss (for next-token prediction). logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). 
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. The TFCamembertForCausalLM forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example:

>>> from transformers import AutoTokenizer, TFCamembertForCausalLM
>>> import tensorflow as tf

>>> tokenizer = AutoTokenizer.from_pretrained("camembert-base")
>>> model = TFCamembertForCausalLM.from_pretrained("camembert-base")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> outputs = model(inputs)
>>> logits = outputs.logits

TFCamembertForMaskedLM
class transformers.TFCamembertForMaskedLM < source > ( *args **kwargs )
Parameters config (CamembertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. CamemBERT Model with a language modeling head on top. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers.
Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. 
This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). labels (tf.Tensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size] A transformers.modeling_tf_outputs.TFMaskedLMOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CamembertConfig) and inputs. loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Masked language modeling (MLM) loss. logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFCamembertForMaskedLM forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example:

>>> from transformers import AutoTokenizer, TFCamembertForMaskedLM
>>> import tensorflow as tf

>>> tokenizer = AutoTokenizer.from_pretrained("camembert-base")
>>> model = TFCamembertForMaskedLM.from_pretrained("camembert-base")

>>> inputs = tokenizer("The capital of France is <mask>.", return_tensors="tf")
>>> logits = model(**inputs).logits

>>> # retrieve index of <mask>
>>> mask_token_index = tf.where((inputs.input_ids == tokenizer.mask_token_id)[0])
>>> selected_logits = tf.gather_nd(logits[0], indices=mask_token_index)

>>> predicted_token_id = tf.math.argmax(selected_logits, axis=-1)
>>> tokenizer.decode(predicted_token_id)
' Paris'

>>> labels = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"]
>>> # mask labels of non-<mask> tokens
>>> labels = tf.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)

>>> outputs = model(**inputs, labels=labels)
>>> round(float(outputs.loss), 2)
0.1

TFCamembertForSequenceClassification
class transformers.TFCamembertForSequenceClassification < source > ( *args **kwargs )
Parameters config (CamembertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. CamemBERT Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!
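The following minimal sketch, not part of the original documentation, illustrates the three input formats described above with TFCamembertForSequenceClassification. It uses the camembert-base checkpoint, so the classification head is randomly initialized; any compatible checkpoint works the same way.

>>> import tensorflow as tf
>>> from transformers import AutoTokenizer, TFCamembertForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("camembert-base")
>>> model = TFCamembertForSequenceClassification.from_pretrained("camembert-base")  # classification head is newly initialized
>>> encoding = tokenizer("J'aime le camembert !", return_tensors="tf")

>>> # 1. all inputs as keyword arguments
>>> outputs = model(input_ids=encoding["input_ids"], attention_mask=encoding["attention_mask"])

>>> # 2. a single tensor, or a list of tensors IN THE ORDER given in the docstring, as the first positional argument
>>> outputs = model(encoding["input_ids"])
>>> outputs = model([encoding["input_ids"], encoding["attention_mask"]])

>>> # 3. a dictionary mapping input names to tensors as the first positional argument
>>> outputs = model({"input_ids": encoding["input_ids"], "attention_mask": encoding["attention_mask"]})

All three calls return the same kind of output object; inside model.fit() the dictionary form is what Keras passes along, which is why it is supported.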
call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). labels (tf.Tensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). 
A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CamembertConfig) and inputs. loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFCamembertForSequenceClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example:

>>> from transformers import AutoTokenizer, TFCamembertForSequenceClassification
>>> import tensorflow as tf

>>> tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base-emotion")
>>> model = TFCamembertForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-emotion")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")

>>> logits = model(**inputs).logits

>>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
>>> model.config.id2label[predicted_class_id]
'optimism'

>>> # To train a model on num_labels classes, you can pass num_labels=num_labels to from_pretrained(...)
>>> num_labels = len(model.config.id2label)
>>> model = TFCamembertForSequenceClassification.from_pretrained("cardiffnlp/twitter-roberta-base-emotion", num_labels=num_labels)

>>> labels = tf.constant(1)
>>> loss = model(**inputs, labels=labels).loss
>>> round(float(loss), 2)
0.08

TFCamembertForMultipleChoice
class transformers.TFCamembertForMultipleChoice < source > ( *args **kwargs )
Parameters config (CamembertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. CamemBERT Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. 
inputs_embeds (tf.Tensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). labels (tf.Tensor of shape (batch_size,), optional) — Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices] where num_choices is the size of the second dimension of the input tensors. (See input_ids above) A transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CamembertConfig) and inputs. loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification loss. logits (tf.Tensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above). Classification scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFCamembertForMultipleChoice forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFCamembertForMultipleChoice >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("camembert-base") >>> model = TFCamembertForMultipleChoice.from_pretrained("camembert-base") >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." >>> choice0 = "It is eaten with a fork and a knife." 
>>> choice1 = "It is eaten while held in the hand." >>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="tf", padding=True) >>> inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()} >>> outputs = model(inputs) >>> >>> logits = outputs.logits TFCamembertForTokenClassification class transformers.TFCamembertForTokenClassification < source > ( *args **kwargs ) Parameters config (CamembertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. CamemBERT Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? 
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). labels (tf.Tensor of shape (batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1]. A transformers.modeling_tf_outputs.TFTokenClassifierOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CamembertConfig) and inputs. loss (tf.Tensor of shape (n,), optional, where n is the number of unmasked labels, returned when labels is provided) — Classification loss. logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. 
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFCamembertForTokenClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFCamembertForTokenClassification >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("ydshieh/roberta-large-ner-english") >>> model = TFCamembertForTokenClassification.from_pretrained("ydshieh/roberta-large-ner-english") >>> inputs = tokenizer( ... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf" ... ) >>> logits = model(**inputs).logits >>> predicted_token_class_ids = tf.math.argmax(logits, axis=-1) >>> >>> >>> >>> predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()] >>> predicted_tokens_classes ['O', 'ORG', 'ORG', 'O', 'O', 'O', 'O', 'O', 'LOC', 'O', 'LOC', 'LOC'] >>> labels = predicted_token_class_ids >>> loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss) >>> round(float(loss), 2) 0.01 TFCamembertForQuestionAnswering class transformers.TFCamembertForQuestionAnswering < source > ( *args **kwargs ) Parameters config (CamembertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. CamemBERT Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute span start logits and span end logits). This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! 
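As a concrete illustration of that fit() workflow, the sketch below runs one training step of the question-answering head on a toy batch. It assumes a recent release in which compiling without a loss argument falls back to the model's internal loss computation, and it passes the labels inside the input dict for that reason; the camembert-base checkpoint, the sentences and the start/end token indices are placeholders chosen only to show the mechanics.
>>> import tensorflow as tf
>>> from transformers import AutoTokenizer, TFCamembertForQuestionAnswering
>>> tokenizer = AutoTokenizer.from_pretrained("camembert-base")
>>> model = TFCamembertForQuestionAnswering.from_pretrained("camembert-base")
>>> # Toy batch; in practice this would come from a SQuAD-style dataset.
>>> features = dict(
...     tokenizer(["Qui est Jim Henson ?"], ["Jim Henson était un marionnettiste."], return_tensors="tf", padding=True)
... )
>>> # Placeholder label positions, included in the input dict so the model can compute its own loss.
>>> features["start_positions"] = tf.constant([7])
>>> features["end_positions"] = tf.constant([10])
>>> # No loss is passed to compile(): the model's internal span-extraction loss is used.
>>> model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5))
>>> model.fit(features, epochs=1, verbose=0)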
If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None start_positions: np.ndarray | tf.Tensor | None = None end_positions: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. 
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). start_positions (tf.Tensor of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. end_positions (tf.Tensor of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. A transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CamembertConfig) and inputs. loss (tf.Tensor of shape (batch_size, ), optional, returned when start_positions and end_positions are provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. start_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFCamembertForQuestionAnswering forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: >>> from transformers import AutoTokenizer, TFCamembertForQuestionAnswering >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("ydshieh/roberta-base-squad2") >>> model = TFCamembertForQuestionAnswering.from_pretrained("ydshieh/roberta-base-squad2") >>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" >>> inputs = tokenizer(question, text, return_tensors="tf") >>> outputs = model(**inputs) >>> answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0]) >>> answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0]) >>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] >>> tokenizer.decode(predict_answer_tokens) ' puppet' >>> >>> target_start_index = tf.constant([14]) >>> target_end_index = tf.constant([15]) >>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index) >>> loss = tf.math.reduce_mean(outputs.loss) >>> round(float(loss), 2) 0.86
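As a small follow-up sketch reusing the tensors from the example above (outputs, answer_start_index and answer_end_index), the start and end logits can be turned into probabilities to attach a rough confidence score to the extracted span, or to rank alternative start positions when post-processing n-best answers. This is illustrative post-processing, not part of the model API.
>>> start_probs = tf.nn.softmax(outputs.start_logits, axis=-1)
>>> end_probs = tf.nn.softmax(outputs.end_logits, axis=-1)
>>> # Rough confidence of the span selected above
>>> span_score = float(start_probs[0, answer_start_index] * end_probs[0, answer_end_index])
>>> # Indices of the three most likely start positions, e.g. for n-best post-processing
>>> top_start_indices = tf.math.top_k(start_probs[0], k=3).indices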
CLAP Overview The CLAP model was proposed in Large Scale Contrastive Language-Audio pretraining with feature fusion and keyword-to-caption augmentation by Yusong Wu, Ke Chen, Tianyu Zhang, Yuchen Hui, Taylor Berg-Kirkpatrick, Shlomo Dubnov. CLAP (Contrastive Language-Audio Pretraining) is a neural network trained on a variety of (audio, text) pairs. It can be instructed to predict the most relevant text snippet for a given audio clip, without directly optimizing for the task. The CLAP model uses a Swin Transformer to get audio features from a log-Mel spectrogram input, and a RoBERTa model to get text features. Both the text and audio features are then projected to a latent space with identical dimension. The dot product between the projected audio and text features is then used as a similarity score. The abstract from the paper is the following: Contrastive learning has shown remarkable success in the field of multimodal representation learning. In this paper, we propose a pipeline of contrastive language-audio pretraining to develop an audio representation by combining audio data with natural language descriptions. To accomplish this target, we first release LAION-Audio-630K, a large collection of 633,526 audio-text pairs from different data sources. Second, we construct a contrastive language-audio pretraining model by considering different audio encoders and text encoders. We incorporate the feature fusion mechanism and keyword-to-caption augmentation into the model design to further enable the model to process audio inputs of variable lengths and enhance the performance. Third, we perform comprehensive experiments to evaluate our model across three tasks: text-to-audio retrieval, zero-shot audio classification, and supervised audio classification. The results demonstrate that our model achieves superior performance in text-to-audio retrieval task. In audio classification tasks, the model achieves state-of-the-art performance in the zero-shot setting and is able to obtain performance comparable to models’ results in the non-zero-shot setting. This model was contributed by Younes Belkada and Arthur Zucker. The original code can be found here. ClapConfig class transformers.ClapConfig < source > ( text_config = None audio_config = None logit_scale_init_value = 14.285714285714285 projection_dim = 512 projection_hidden_act = 'relu' initializer_factor = 1.0 **kwargs ) Parameters text_config (dict, optional) — Dictionary of configuration options used to initialize ClapTextConfig. audio_config (dict, optional) — Dictionary of configuration options used to initialize ClapAudioConfig. projection_dim (int, optional, defaults to 512) — Dimensionality of the text and audio projection layers. logit_scale_init_value (float, optional, defaults to 14.285714285714285) — The initial value of the logit_scale parameter. The default is the one used in the original CLAP implementation. projection_hidden_act (str, optional, defaults to "relu") — Activation function for the projection layers. initializer_factor (float, optional, defaults to 1.0) — Factor to scale the initialization of the model weights. kwargs (optional) — Dictionary of keyword arguments. ClapConfig is the configuration class to store the configuration of a ClapModel. It is used to instantiate a CLAP model according to the specified arguments, defining the text model and audio model configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the CLAP laion/clap-htsat-fused architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import ClapConfig, ClapModel >>> configuration = ClapConfig() >>> model = ClapModel(configuration) >>> configuration = model.config >>> from transformers import ClapTextConfig, ClapAudioConfig >>> config_text = ClapTextConfig() >>> config_audio = ClapAudioConfig() >>> config = ClapConfig.from_text_audio_configs(config_text, config_audio) from_text_audio_configs < source > ( text_config: ClapTextConfig audio_config: ClapAudioConfig **kwargs ) → ClapConfig An instance of a configuration object Instantiate a ClapConfig (or a derived class) from CLAP text model configuration and CLAP audio model configuration. ClapTextConfig class transformers.ClapTextConfig < source > ( vocab_size = 50265 hidden_size = 768 num_hidden_layers = 12 num_attention_heads = 12 intermediate_size = 3072 hidden_act = 'gelu' hidden_dropout_prob = 0.1 attention_probs_dropout_prob = 0.1 max_position_embeddings = 514 type_vocab_size = 1 initializer_factor = 1.0 layer_norm_eps = 1e-12 projection_dim = 512 pad_token_id = 1 bos_token_id = 0 eos_token_id = 2 position_embedding_type = 'absolute' use_cache = True projection_hidden_act = 'relu' **kwargs ) Parameters vocab_size (int, optional, defaults to 50265) — Vocabulary size of the CLAP text model. Defines the number of different tokens that can be represented by the input_ids passed when calling ClapTextModel. hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder. hidden_act (str or Callable, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported. hidden_dropout_prob (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (float, optional, defaults to 0.1) — The dropout ratio for the attention probabilities. max_position_embeddings (int, optional, defaults to 514) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). type_vocab_size (int, optional, defaults to 1) — The vocabulary size of the token_type_ids passed when calling ClapTextModel. layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers. position_embedding_type (str, optional, defaults to "absolute") — Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For positional embeddings use "absolute". For more information on "relative_key", please refer to Self-Attention with Relative Position Representations (Shaw et al.). For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models with Better Relative Position Embeddings (Huang et al.). is_decoder (bool, optional, defaults to False) — Whether the model is used as a decoder or not.
If False, the model is used as an encoder. use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if config.is_decoder=True. projection_hidden_act (str, optional, defaults to "relu") — The non-linear activation function (function or string) in the projection layer. If string, "gelu", "relu", "silu" and "gelu_new" are supported. projection_dim (int, optional, defaults to 512) — Dimension of the projection head of the ClapTextModelWithProjection. This is the configuration class to store the configuration of a ClapTextModel. It is used to instantiate a CLAP model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the CLAP calp-hsat-fused architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Examples: >>> from transformers import ClapTextConfig, ClapTextModel >>> >>> configuration = ClapTextConfig() >>> >>> model = ClapTextModel(configuration) >>> >>> configuration = model.config ClapAudioConfig class transformers.ClapAudioConfig < source > ( window_size = 8 num_mel_bins = 64 spec_size = 256 hidden_act = 'gelu' patch_size = 4 patch_stride = [4, 4] num_classes = 527 hidden_size = 768 projection_dim = 512 depths = [2, 2, 6, 2] num_attention_heads = [4, 8, 16, 32] enable_fusion = False hidden_dropout_prob = 0.1 fusion_type = None patch_embed_input_channels = 1 flatten_patch_embeds = True patch_embeds_hidden_size = 96 enable_patch_layer_norm = True drop_path_rate = 0.0 attention_probs_dropout_prob = 0.0 qkv_bias = True mlp_ratio = 4.0 aff_block_r = 4 num_hidden_layers = 4 projection_hidden_act = 'relu' layer_norm_eps = 1e-05 initializer_factor = 1.0 **kwargs ) Parameters window_size (int, optional, defaults to 8) — Image size of the spectrogram num_mel_bins (int, optional, defaults to 64) — Number of mel features used per frames. Should correspond to the value used in the ClapProcessor class. spec_size (int, optional, defaults to 256) — Desired input size of the spectrogram that the model supports. It can be different from the output of the ClapFeatureExtractor, in which case the input features will be resized. Corresponds to the image_size of the audio models. hidden_act (str, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported. patch_size (int, optional, defaults to 4) — Patch size for the audio spectrogram patch_stride (list, optional, defaults to [4, 4]) — Patch stride for the audio spectrogram num_classes (int, optional, defaults to 527) — Number of classes used for the head training hidden_size (int, optional, defaults to 768) — Hidden size of the output of the audio encoder. Correspond to the dimension of the penultimate layer’s output,which is sent to the projection MLP layer. projection_dim (int, optional, defaults to 512) — Hidden size of the projection layer. depths (list, optional, defaults to [2, 2, 6, 2]) — Depths used for the Swin Layers of the audio model num_attention_heads (list, optional, defaults to [4, 8, 16, 32]) — Number of attention heads used for the Swin Layers of the audio model enable_fusion (bool, optional, defaults to False) — Whether or not to enable patch fusion. 
This is the main contribution of the authors, and should give the best results. hidden_dropout_prob (float, optional, defaults to 0.1) — The dropout probabilitiy for all fully connected layers in the encoder. fusion_type ([type], optional) — Fusion type used for the patch fusion. patch_embed_input_channels (int, optional, defaults to 1) — Number of channels used for the input spectrogram flatten_patch_embeds (bool, optional, defaults to True) — Whether or not to flatten the patch embeddings patch_embeds_hidden_size (int, optional, defaults to 96) — Hidden size of the patch embeddings. It is used as the number of output channels. enable_patch_layer_norm (bool, optional, defaults to True) — Whether or not to enable layer normalization for the patch embeddings drop_path_rate (float, optional, defaults to 0.0) — Drop path rate for the patch fusion attention_probs_dropout_prob (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities. qkv_bias (bool, optional, defaults to True) — Whether or not to add a bias to the query, key, value projections. mlp_ratio (float, optional, defaults to 4.0) — Ratio of the mlp hidden dim to embedding dim. aff_block_r (int, optional, defaults to 4) — downsize_ratio used in the AudioFF block num_hidden_layers (int, optional, defaults to 4) — Number of hidden layers in the Transformer encoder. projection_hidden_act (str, optional, defaults to "relu") — The non-linear activation function (function or string) in the projection layer. If string, "gelu", "relu", "silu" and "gelu_new" are supported. layer_norm_eps ([type], optional, defaults to 1e-5) — The epsilon used by the layer normalization layers. initializer_factor (float, optional, defaults to 1.0) — A factor for initializing all weight matrices (should be kept to 1, used internally for initialization testing). This is the configuration class to store the configuration of a ClapAudioModel. It is used to instantiate a CLAP audio encoder according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the audio encoder of the CLAP laion/clap-htsat-fused architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import ClapAudioConfig, ClapAudioModel >>> >>> configuration = ClapAudioConfig() >>> >>> model = ClapAudioModel(configuration) >>> >>> configuration = model.config ClapFeatureExtractor ( feature_size = 64 sampling_rate = 48000 hop_length = 480 max_length_s = 10 fft_window_size = 1024 padding_value = 0.0 return_attention_mask = False frequency_min: float = 0 frequency_max: float = 14000 top_db: int = None truncation: str = 'fusion' padding: str = 'repeatpad' **kwargs ) Parameters feature_size (int, defaults to 64) — The feature dimension of the extracted Mel spectrograms. This corresponds to the number of mel filters (n_mels). sampling_rate (int, defaults to 48_000) — The sampling rate at which the audio files should be digitalized expressed in hertz (Hz). This only serves to warn users if the audio fed to the feature extractor does not have the same sampling rate. hop_length (int, defaults to 480) — Length of the overlaping windows for the STFT used to obtain the Mel Spectrogram. The audio will be split in smaller frames with a step of hop_length between each frame. 
max_length_s (int, defaults to 10) — The maximum input length of the model in seconds. This is used to pad the audio. fft_window_size (int, defaults to 1024) — Size of the window (in samples) on which the Fourier transform is applied. This controls the frequency resolution of the spectrogram. For example, 400 means that the Fourier transform is computed on windows of 400 samples. padding_value (float, optional, defaults to 0.0) — Padding value used to pad the audio. Should correspond to silences. return_attention_mask (bool, optional, defaults to False) — Whether or not the model should return the attention masks corresponding to the input. frequency_min (float, optional, defaults to 0) — The lowest frequency of interest. The STFT will not be computed for values below this. frequency_max (float, optional, defaults to 14_000) — The highest frequency of interest. The STFT will not be computed for values above this. top_db (float, optional) — The highest decibel value used to convert the mel spectrogram to the log scale. For more details see the audio_utils.power_to_db function. truncation (str, optional, defaults to "fusion") — Truncation pattern for long audio inputs. Two patterns are available: fusion will use _random_mel_fusion, which stacks 3 random crops from the mel spectrogram and a downsampled version of the entire mel spectrogram. If config.fusion is set to True, shorter audios also need to return 4 mels, which will just be a copy of the original mel obtained from the padded audio. rand_trunc will select a random crop of the mel spectrogram. padding (str, optional, defaults to "repeatpad") — Padding pattern for shorter audio inputs. Three patterns were originally implemented: repeatpad: the audio is repeated, and then padded to fit the max_length. repeat: the audio is repeated and then cut to fit the max_length. pad: the audio is padded. Constructs a CLAP feature extractor. This feature extractor inherits from SequenceFeatureExtractor which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. This class extracts mel-filter bank features from raw speech using a custom numpy implementation of the Short Time Fourier Transform (STFT) which should match pytorch’s torch.stft equivalent. to_dict < source > ( ) → Dict[str, Any] Dictionary of all the attributes that make up this configuration instance, except for the mel filter banks, which do not need to be saved or printed as they are too long. Serializes this instance to a Python dictionary. ClapProcessor Constructs a CLAP processor which wraps a CLAP feature extractor and a RoBERTa tokenizer into a single processor. ClapProcessor offers all the functionalities of ClapFeatureExtractor and RobertaTokenizerFast. See the __call__() and decode() for more information. batch_decode < source > ( *args **kwargs ) This method forwards all its arguments to RobertaTokenizerFast’s batch_decode(). Please refer to the docstring of this method for more information. decode < source > ( *args **kwargs ) This method forwards all its arguments to RobertaTokenizerFast’s decode(). Please refer to the docstring of this method for more information. ClapModel class transformers.ClapModel < source > ( config: ClapConfig ) Parameters config (ClapConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. This model inherits from PreTrainedModel.
Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None input_features: typing.Optional[torch.FloatTensor] = None is_longer: typing.Optional[torch.BoolTensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.LongTensor] = None return_loss: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.clap.modeling_clap.ClapOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? input_features (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Input audio features. This should be returnes by the ClapFeatureExtractor class that you can also retrieve from AutoFeatureExtractor. See ClapFeatureExtractor.__call__() for details. return_loss (bool, optional) — Whether or not to return the contrastive loss. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.models.clap.modeling_clap.ClapOutput or tuple(torch.FloatTensor) A transformers.models.clap.modeling_clap.ClapOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.clap.configuration_clap.ClapConfig'>) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when return_loss is True) — Contrastive loss for audio-text similarity. logits_per_audio:(torch.FloatTensor of shape (audio_batch_size, text_batch_size)) — The scaled dot product scores between audio_embeds and text_embeds. This represents the audio-text similarity scores. logits_per_text:(torch.FloatTensor of shape (text_batch_size, audio_batch_size)) — The scaled dot product scores between text_embeds and audio_embeds. This represents the text-audio similarity scores. 
text_embeds(torch.FloatTensor of shape (batch_size, output_dim) — The text embeddings obtained by applying the projection layer to the pooled output of ClapTextModel. audio_embeds(torch.FloatTensor of shape (batch_size, output_dim) — The audio embeddings obtained by applying the projection layer to the pooled output of ClapAudioModel. text_model_output(BaseModelOutputWithPooling): The output of the ClapTextModel. audio_model_output(BaseModelOutputWithPooling): The output of the ClapAudioModel. The ClapModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from datasets import load_dataset >>> from transformers import AutoProcessor, ClapModel >>> dataset = load_dataset("ashraq/esc50") >>> audio_sample = dataset["train"]["audio"][0]["array"] >>> model = ClapModel.from_pretrained("laion/clap-htsat-unfused") >>> processor = AutoProcessor.from_pretrained("laion/clap-htsat-unfused") >>> input_text = ["Sound of a dog", "Sound of vaccum cleaner"] >>> inputs = processor(text=input_text, audios=audio_sample, return_tensors="pt", padding=True) >>> outputs = model(**inputs) >>> logits_per_audio = outputs.logits_per_audio >>> probs = logits_per_audio.softmax(dim=-1) get_text_features < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → text_features (torch.FloatTensor of shape (batch_size, output_dim) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns text_features (torch.FloatTensor of shape (batch_size, output_dim) The text embeddings obtained by applying the projection layer to the pooled output of ClapTextModel. The ClapModel forward method, overrides the __call__ special method. 
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import AutoTokenizer, ClapModel >>> model = ClapModel.from_pretrained("laion/clap-htsat-unfused") >>> tokenizer = AutoTokenizer.from_pretrained("laion/clap-htsat-unfused") >>> inputs = tokenizer(["the sound of a cat", "the sound of a dog"], padding=True, return_tensors="pt") >>> text_features = model.get_text_features(**inputs) get_audio_features < source > ( input_features: typing.Optional[torch.Tensor] = None is_longer: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → audio_features (torch.FloatTensor of shape (batch_size, output_dim) Parameters input_features (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Input audio features. This should be returnes by the ClapFeatureExtractor class that you can also retrieve from AutoFeatureExtractor. See ClapFeatureExtractor.__call__() for details. is_longer (torch.FloatTensor, of shape (batch_size, 1), optional) — Whether the audio clip is longer than max_length. If True, a feature fusion will be enabled to enhance the features. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns audio_features (torch.FloatTensor of shape (batch_size, output_dim) The audio embeddings obtained by applying the projection layer to the pooled output of ClapAudioModel. The ClapModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import AutoFeatureExtractor, ClapModel >>> import torch >>> model = ClapModel.from_pretrained("laion/clap-htsat-unfused") >>> feature_extractor = AutoFeatureExtractor.from_pretrained("laion/clap-htsat-unfused") >>> random_audio = torch.rand((16_000)) >>> inputs = feature_extractor(random_audio, return_tensors="pt") >>> audio_features = model.get_audio_features(**inputs) ClapTextModel class transformers.ClapTextModel < source > ( config add_pooling_layer = True ) The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in Attention is all you need_ by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. To behave as an decoder the model needs to be initialized with the is_decoder argument of the configuration set to True. 
To be used in a Seq2Seq model, the model needs to initialized with both is_decoder argument and add_cross_attention set to True; an encoder_hidden_states is then expected as an input to the forward pass. .. _Attention is all you need: https://arxiv.org/abs/1706.03762 forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None encoder_hidden_states: typing.Optional[torch.Tensor] = None encoder_attention_mask: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional): Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional): Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)): Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). use_cache (bool, optional): If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). ClapTextModelWithProjection class transformers.ClapTextModelWithProjection < source > ( config: ClapTextConfig ) Parameters config (ClapConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. CLAP Text Model with a projection layer on top (a linear layer on top of the pooled output). This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.clap.modeling_clap.ClapTextModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.models.clap.modeling_clap.ClapTextModelOutput or tuple(torch.FloatTensor) A transformers.models.clap.modeling_clap.ClapTextModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.clap.configuration_clap.ClapTextConfig'>) and inputs. text_embeds (torch.FloatTensor of shape (batch_size, output_dim) optional returned when model is initialized with with_projection=True) — The text embeddings obtained by applying the projection layer to the pooler_output. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The ClapTextModelWithProjection forward method, overrides the __call__ special method. 
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import AutoTokenizer, ClapTextModelWithProjection >>> model = ClapTextModelWithProjection.from_pretrained("laion/clap-htsat-unfused") >>> tokenizer = AutoTokenizer.from_pretrained("laion/clap-htsat-unfused") >>> inputs = tokenizer(["a sound of a cat", "a sound of a dog"], padding=True, return_tensors="pt") >>> outputs = model(**inputs) >>> text_embeds = outputs.text_embeds ClapAudioModel class transformers.ClapAudioModel < source > ( config: ClapAudioConfig ) forward < source > ( input_features: typing.Optional[torch.FloatTensor] = None is_longer: typing.Optional[torch.BoolTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor) Parameters input_features (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Input audio features. This should be returnes by the ClapFeatureExtractor class that you can also retrieve from AutoFeatureExtractor. See ClapFeatureExtractor.__call__() for details. is_longer (torch.FloatTensor, of shape (batch_size, 1), optional) — Whether the audio clip is longer than max_length. If True, a feature fusion will be enabled to enhance the features. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.clap.configuration_clap.ClapAudioConfig'>) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. 
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The ClapAudioModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from datasets import load_dataset >>> from transformers import AutoProcessor, ClapAudioModel >>> dataset = load_dataset("ashraq/esc50") >>> audio_sample = dataset["train"]["audio"][0]["array"] >>> model = ClapAudioModel.from_pretrained("laion/clap-htsat-fused") >>> processor = AutoProcessor.from_pretrained("laion/clap-htsat-fused") >>> inputs = processor(audios=audio_sample, return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_state = outputs.last_hidden_state ClapAudioModelWithProjection class transformers.ClapAudioModelWithProjection < source > ( config: ClapAudioConfig ) Parameters config (ClapConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. CLAP Audio Model with a projection layer on top (a linear layer on top of the pooled output). This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_features: typing.Optional[torch.FloatTensor] = None is_longer: typing.Optional[torch.BoolTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.clap.modeling_clap.ClapAudioModelOutput or tuple(torch.FloatTensor) Parameters input_features (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Input audio features. This should be returned by the ClapFeatureExtractor class that you can also retrieve from AutoFeatureExtractor. See ClapFeatureExtractor.__call__() for details. is_longer (torch.FloatTensor of shape (batch_size, 1), optional) — Whether the audio clip is longer than max_length. If True, a feature fusion will be enabled to enhance the features. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns transformers.models.clap.modeling_clap.ClapAudioModelOutput or tuple(torch.FloatTensor) A transformers.models.clap.modeling_clap.ClapAudioModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.clap.configuration_clap.ClapAudioConfig'>) and inputs. audio_embeds (torch.FloatTensor of shape (batch_size, hidden_size)) — The Audio embeddings obtained by applying the projection layer to the pooler_output. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. The ClapAudioModelWithProjection forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from datasets import load_dataset >>> from transformers import ClapAudioModelWithProjection, ClapProcessor >>> model = ClapAudioModelWithProjection.from_pretrained("laion/clap-htsat-fused") >>> processor = ClapProcessor.from_pretrained("laion/clap-htsat-fused") >>> dataset = load_dataset("ashraq/esc50") >>> audio_sample = dataset["train"]["audio"][0]["array"] >>> inputs = processor(audios=audio_sample, return_tensors="pt") >>> outputs = model(**inputs) >>> audio_embeds = outputs.audio_embeds
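Because ClapTextModelWithProjection and ClapAudioModelWithProjection project both modalities into the same embedding space, their outputs can be compared directly. The following is a minimal sketch, assuming the laion/clap-htsat-fused checkpoint and the ESC-50 sample used above, that scores two candidate captions against one audio clip with a plain cosine similarity (the full ClapModel additionally applies a learned logit scale):

>>> import torch
>>> from datasets import load_dataset
>>> from transformers import ClapProcessor, ClapTextModelWithProjection, ClapAudioModelWithProjection
>>> processor = ClapProcessor.from_pretrained("laion/clap-htsat-fused")
>>> text_model = ClapTextModelWithProjection.from_pretrained("laion/clap-htsat-fused")
>>> audio_model = ClapAudioModelWithProjection.from_pretrained("laion/clap-htsat-fused")
>>> dataset = load_dataset("ashraq/esc50")
>>> audio_sample = dataset["train"]["audio"][0]["array"]
>>> text_inputs = processor(text=["a sound of a dog", "a sound of a car"], padding=True, return_tensors="pt")
>>> audio_inputs = processor(audios=audio_sample, return_tensors="pt")
>>> # L2-normalize the projected embeddings before comparing them
>>> text_embeds = torch.nn.functional.normalize(text_model(**text_inputs).text_embeds, dim=-1)
>>> audio_embeds = torch.nn.functional.normalize(audio_model(**audio_inputs).audio_embeds, dim=-1)
>>> similarity = text_embeds @ audio_embeds.T  # one score per caption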
https://huggingface.co/docs/transformers/model_doc/codegen
CodeGen Overview The CodeGen model was proposed in A Conversational Paradigm for Program Synthesis by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, and Caiming Xiong. CodeGen is an autoregressive language model for program synthesis trained sequentially on The Pile, BigQuery, and BigPython. The abstract from the paper is the following: Program synthesis strives to generate a computer program as a solution to a given problem specification. We propose a conversational program synthesis approach via large language models, which addresses the challenges of searching over a vast program space and user intent specification faced in prior approaches. Our new approach casts the process of writing a specification and program as a multi-turn conversation between a user and a system. It treats program synthesis as a sequence prediction problem, in which the specification is expressed in natural language and the desired program is conditionally sampled. We train a family of large language models, called CodeGen, on natural language and programming language data. With weak supervision in the data and the scaling up of data size and model size, conversational capacities emerge from the simple autoregressive language modeling. To study the model behavior on conversational program synthesis, we develop a multi-turn programming benchmark (MTPB), where solving each problem requires multi-step synthesis via multi-turn conversation between the user and the model. Our findings show the emergence of conversational capabilities and the effectiveness of the proposed conversational program synthesis paradigm. In addition, our model CodeGen (with up to 16B parameters trained on TPU-v4) outperforms OpenAI’s Codex on the HumanEval benchmark. We make the training library JaxFormer including checkpoints available as open source contribution: this https URL. This model was contributed by Hiroaki Hayashi. The original code can be found here. Checkpoint Naming CodeGen model checkpoints are available on different pre-training data with variable sizes. The format is Salesforce/codegen-{size}-{data}, where:
size: 350M, 2B, 6B, 16B
data:
nl: Pre-trained on the Pile
multi: Initialized with nl, then further pre-trained on multiple programming languages data
mono: Initialized with multi, then further pre-trained on Python data
For example, Salesforce/codegen-350M-mono offers a 350 million-parameter checkpoint pre-trained sequentially on the Pile, multiple programming languages, and Python.
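To make the naming scheme concrete, the short snippet below builds checkpoint identifiers from the size and data variants listed above; it is only an illustrative sketch, and you should check that a given combination is actually published on the Hub before loading it.

>>> # Illustrative only: enumerate names following the Salesforce/codegen-{size}-{data} scheme
>>> sizes = ["350M", "2B", "6B", "16B"]
>>> variants = ["nl", "multi", "mono"]
>>> checkpoints = [f"Salesforce/codegen-{size}-{data}" for size in sizes for data in variants]
>>> checkpoints[0]
'Salesforce/codegen-350M-nl'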
How to use >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> checkpoint = "Salesforce/codegen-350M-mono" >>> model = AutoModelForCausalLM.from_pretrained(checkpoint) >>> tokenizer = AutoTokenizer.from_pretrained(checkpoint) >>> text = "def hello_world():" >>> completion = model.generate(**tokenizer(text, return_tensors="pt")) >>> print(tokenizer.decode(completion[0])) def hello_world(): print("Hello World") hello_world() Documentation resources Causal language modeling task guide CodeGenConfig class transformers.CodeGenConfig < source > ( vocab_size = 50400 n_positions = 2048 n_ctx = 2048 n_embd = 4096 n_layer = 28 n_head = 16 rotary_dim = 64 n_inner = None activation_function = 'gelu_new' resid_pdrop = 0.0 embd_pdrop = 0.0 attn_pdrop = 0.0 layer_norm_epsilon = 1e-05 initializer_range = 0.02 use_cache = True bos_token_id = 50256 eos_token_id = 50256 tie_word_embeddings = False **kwargs ) Parameters vocab_size (int, optional, defaults to 50400) — Vocabulary size of the CodeGen model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling CodeGenModel. n_positions (int, optional, defaults to 2048) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). n_embd (int, optional, defaults to 4096) — Dimensionality of the embeddings and hidden states. n_layer (int, optional, defaults to 28) — Number of hidden layers in the Transformer encoder. n_head (int, optional, defaults to 16) — Number of attention heads for each attention layer in the Transformer encoder. rotary_dim (int, optional, defaults to 64) — Number of dimensions in the embedding that Rotary Position Embedding is applied to. n_inner (int, optional, defaults to None) — Dimensionality of the inner feed-forward layers. None will set it to 4 times n_embd activation_function (str, optional, defaults to "gelu_new") — Activation function, to be selected in the list ["relu", "silu", "gelu", "tanh", "gelu_new"]. resid_pdrop (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. embd_pdrop (int, optional, defaults to 0.1) — The dropout ratio for the embeddings. attn_pdrop (float, optional, defaults to 0.1) — The dropout ratio for the attention. layer_norm_epsilon (float, optional, defaults to 1e-5) — The epsilon to use in the layer normalization layers. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). This is the configuration class to store the configuration of a CodeGenModel. It is used to instantiate a CodeGen model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the CodeGen Salesforce/codegen-2B-mono architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. 
Example: >>> from transformers import CodeGenConfig, CodeGenModel >>> # Initializing a CodeGen configuration >>> configuration = CodeGenConfig() >>> # Initializing a model (with random weights) from the configuration >>> model = CodeGenModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config CodeGenTokenizer class transformers.CodeGenTokenizer < source > ( vocab_file merges_file errors = 'replace' unk_token = '<|endoftext|>' bos_token = '<|endoftext|>' eos_token = '<|endoftext|>' pad_token = None add_prefix_space = False add_bos_token = False **kwargs ) Parameters vocab_file (str) — Path to the vocabulary file. merges_file (str) — Path to the merges file. errors (str, optional, defaults to "replace") — Paradigm to follow when decoding bytes to UTF-8. See bytes.decode for more information. unk_token (str, optional, defaults to <|endoftext|>) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. bos_token (str, optional, defaults to <|endoftext|>) — The beginning of sequence token. eos_token (str, optional, defaults to <|endoftext|>) — The end of sequence token. add_prefix_space (bool, optional, defaults to False) — Whether or not to add an initial space to the input. This allows you to treat the leading word just as any other word. (The CodeGen tokenizer detects the beginning of words by the preceding space.) Construct a CodeGen tokenizer. Based on byte-level Byte-Pair-Encoding. This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will be encoded differently whether it is at the beginning of the sentence (without space) or not: >>> from transformers import CodeGenTokenizer >>> tokenizer = CodeGenTokenizer.from_pretrained("Salesforce/codegen-350M-mono") >>> tokenizer("Hello world")["input_ids"] [15496, 995] >>> tokenizer(" Hello world")["input_ids"] [18435, 995] You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer or when you call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance. When used with is_split_into_words=True, this tokenizer will add a space before each word (even the first one). This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. save_vocabulary < source > ( save_directory: str filename_prefix: typing.Optional[str] = None ) CodeGenTokenizerFast class transformers.CodeGenTokenizerFast < source > ( vocab_file = None merges_file = None tokenizer_file = None unk_token = '<|endoftext|>' bos_token = '<|endoftext|>' eos_token = '<|endoftext|>' add_prefix_space = False **kwargs ) Parameters vocab_file (str) — Path to the vocabulary file. merges_file (str) — Path to the merges file. errors (str, optional, defaults to "replace") — Paradigm to follow when decoding bytes to UTF-8. See bytes.decode for more information. unk_token (str, optional, defaults to <|endoftext|>) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. bos_token (str, optional, defaults to <|endoftext|>) — The beginning of sequence token. eos_token (str, optional, defaults to <|endoftext|>) — The end of sequence token. add_prefix_space (bool, optional, defaults to False) — Whether or not to add an initial space to the input. This allows you to treat the leading word just as any other word. (The CodeGen tokenizer detects the beginning of words by the preceding space.)
trim_offsets (bool, optional, defaults to True) — Whether or not the post-processing step should trim offsets to avoid including whitespaces. Construct a “fast” CodeGen tokenizer (backed by HuggingFace’s tokenizers library). Based on byte-level Byte-Pair-Encoding. This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will be encoded differently whether it is at the beginning of the sentence (without space) or not: >>> from transformers import CodeGenTokenizerFast >>> tokenizer = CodeGenTokenizerFast.from_pretrained("Salesforce/codegen-350M-mono") >>> tokenizer("Hello world")["input_ids"] [15496, 995] >>> tokenizer(" Hello world")["input_ids"] [18435, 995] You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer, but since the model was not pretrained this way, it might yield a decrease in performance. When used with is_split_into_words=True, this tokenizer needs to be instantiated with add_prefix_space=True. This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. decode < source > ( token_ids: typing.Union[int, typing.List[int], ForwardRef('np.ndarray'), ForwardRef('torch.Tensor'), ForwardRef('tf.Tensor')] skip_special_tokens: bool = False clean_up_tokenization_spaces: bool = None truncate_before_pattern: typing.Optional[typing.List[str]] = None **kwargs ) → str Parameters token_ids (Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor]) — List of tokenized input ids. Can be obtained using the __call__ method. skip_special_tokens (bool, optional, defaults to False) — Whether or not to remove special tokens in the decoding. clean_up_tokenization_spaces (bool, optional) — Whether or not to clean up the tokenization spaces. If None, will default to self.clean_up_tokenization_spaces (available in the tokenizer_config). truncate_before_pattern (List[str], optional, defaults to None) — A list of regular expression strings that will be used to truncate the returned string. This can be used to remove extra pieces of code (e.g. truncate if observing a comment symbol "#" at the beginning of a new line). An example pattern could be ["^#", re.escape("<|endoftext|>"), "^'''", "\n\n\n"]. kwargs (additional keyword arguments, optional) — Will be passed to the underlying model specific decode method. Returns str The decoded sentence. Converts a sequence of ids into a string, using the tokenizer and vocabulary, with options to remove special tokens and clean up tokenization spaces. Similar to doing self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids)). CodeGenModel class transformers.CodeGenModel < source > ( config ) Parameters config (CodeGenConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare CodeGen Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_attention_heads,) or (n_layer, num_attention_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_dim), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CodeGenConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The CodeGenModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, CodeGenModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-2B-mono") >>> model = CodeGenModel.from_pretrained("Salesforce/codegen-2B-mono") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state CodeGenForCausalLM class transformers.CodeGenForCausalLM < source > ( config ) Parameters config (CodeGenConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The CodeGen Model transformer with a language modeling head on top. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
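Before the full forward signature below, here is a hedged sketch of how the past_key_values cache described above can be reused for manual incremental decoding; in practice generate() handles this for you, and the checkpoint name and greedy token choice here are illustrative assumptions only.

>>> from transformers import AutoTokenizer, CodeGenForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-mono")
>>> model = CodeGenForCausalLM.from_pretrained("Salesforce/codegen-350M-mono")
>>> inputs = tokenizer("def add(a, b):", return_tensors="pt")
>>> # First pass encodes the whole prompt and returns the key/value cache
>>> outputs = model(**inputs, use_cache=True)
>>> past = outputs.past_key_values
>>> next_token = outputs.logits[:, -1:].argmax(dim=-1)
>>> # Later passes only feed the newly generated token together with the cache
>>> outputs = model(input_ids=next_token, past_key_values=past, use_cache=True)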
forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_attention_heads,) or (n_layer, num_attention_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_dim), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set labels = input_ids Indices are selected in [-100, 0, ..., config.vocab_size] All labels set to -100 are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size] A transformers.modeling_outputs.CausalLMOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CodeGenConfig) and inputs.
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction). logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The CodeGenForCausalLM forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> import torch >>> from transformers import AutoTokenizer, CodeGenForCausalLM >>> tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-2B-mono") >>> model = CodeGenForCausalLM.from_pretrained("Salesforce/codegen-2B-mono") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs, labels=inputs["input_ids"]) >>> loss = outputs.loss >>> logits = outputs.logits
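Tying the pieces above together, the sketch below samples a completion and then uses the truncate_before_pattern option of CodeGenTokenizerFast.decode() (documented earlier) to cut the output at a top-level comment line or the end-of-text marker. The prompt and sampling hyperparameters are arbitrary illustrations, not recommended settings.

>>> import re
>>> from transformers import AutoModelForCausalLM, AutoTokenizer
>>> checkpoint = "Salesforce/codegen-350M-mono"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> model = AutoModelForCausalLM.from_pretrained(checkpoint)
>>> inputs = tokenizer("def add(a, b):", return_tensors="pt")
>>> # Sampled rather than greedy completion; the hyperparameters are arbitrary
>>> completion = model.generate(**inputs, do_sample=True, temperature=0.2, max_new_tokens=64, pad_token_id=tokenizer.eos_token_id)
>>> # Cut the decoded text before a top-level comment line or the <|endoftext|> marker
>>> print(tokenizer.decode(completion[0], truncate_before_pattern=["^#", re.escape("<|endoftext|>")]))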
https://huggingface.co/docs/transformers/tasks_explained
How 🤗 Transformers solve tasks In What 🤗 Transformers can do, you learned about natural language processing (NLP), speech and audio, computer vision tasks, and some important applications of them. This page will look closely at how models solve these tasks and explain what’s happening under the hood. There are many ways to solve a given task, some models may implement certain techniques or even approach the task from a new angle, but for Transformer models, the general idea is the same. Owing to its flexible architecture, most models are a variant of an encoder, decoder, or encoder-decoder structure. In addition to Transformer models, our library also has several convolutional neural networks (CNNs), which are still used today for computer vision tasks. We’ll also explain how a modern CNN works. To explain how tasks are solved, we’ll walk through what goes on inside the model to output useful predictions. Wav2Vec2 for audio classification and automatic speech recognition (ASR) Vision Transformer (ViT) and ConvNeXT for image classification DETR for object detection Mask2Former for image segmentation GLPN for depth estimation BERT for NLP tasks like text classification, token classification and question answering that use an encoder GPT2 for NLP tasks like text generation that use a decoder BART for NLP tasks like summarization and translation that use an encoder-decoder Before you go further, it is good to have some basic knowledge of the original Transformer architecture. Knowing how encoders, decoders, and attention work will aid you in understanding how different Transformer models work. If you’re just getting started or need a refresher, check out our course for more information! Speech and audio Wav2Vec2 is a self-supervised model pretrained on unlabeled speech data and finetuned on labeled data for audio classification and automatic speech recognition. This model has four main components: A feature encoder takes the raw audio waveform, normalizes it to zero mean and unit variance, and converts it into a sequence of feature vectors that are each 20ms long. Waveforms are continuous by nature, so they can’t be divided into separate units like a sequence of text can be split into words. That’s why the feature vectors are passed to a quantization module, which aims to learn discrete speech units. The speech unit is chosen from a collection of codewords, known as a codebook (you can think of this as the vocabulary). From the codebook, the vector or speech unit, that best represents the continuous audio input is chosen and forwarded through the model. About half of the feature vectors are randomly masked, and the masked feature vector is fed to a context network, which is a Transformer encoder that also adds relative positional embeddings. The pretraining objective of the context network is a contrastive task. The model has to predict the true quantized speech representation of the masked prediction from a set of false ones, encouraging the model to find the most similar context vector and quantized speech unit (the target label). Now that wav2vec2 is pretrained, you can finetune it on your data for audio classification or automatic speech recognition! Audio classification To use the pretrained model for audio classification, add a sequence classification head on top of the base Wav2Vec2 model. The classification head is a linear layer that accepts the encoder’s hidden states. The hidden states represent the learned features from each audio frame which can have varying lengths. 
To create one vector of fixed-length, the hidden states are pooled first and then transformed into logits over the class labels. The cross-entropy loss is calculated between the logits and target to find the most likely class. Ready to try your hand at audio classification? Check out our complete audio classification guide to learn how to finetune Wav2Vec2 and use it for inference! Automatic speech recognition To use the pretrained model for automatic speech recognition, add a language modeling head on top of the base Wav2Vec2 model for connectionist temporal classification (CTC). The language modeling head is a linear layer that accepts the encoder’s hidden states and transforms them into logits. Each logit represents a token class (the number of tokens comes from the task vocabulary). The CTC loss is calculated between the logits and targets to find the most likely sequence of tokens, which are then decoded into a transcription. Ready to try your hand at automatic speech recognition? Check out our complete automatic speech recognition guide to learn how to finetune Wav2Vec2 and use it for inference! Computer vision There are two ways to approach computer vision tasks: Split an image into a sequence of patches and process them in parallel with a Transformer. Use a modern CNN, like ConvNeXT, which relies on convolutional layers but adopts modern network designs. A third approach mixes Transformers with convolutions (for example, Convolutional Vision Transformer or LeViT). We won’t discuss those because they just combine the two approaches we examine here. ViT and ConvNeXT are commonly used for image classification, but for other vision tasks like object detection, segmentation, and depth estimation, we’ll look at DETR, Mask2Former and GLPN, respectively; these models are better suited for those tasks. Image classification ViT and ConvNeXT can both be used for image classification; the main difference is that ViT uses an attention mechanism while ConvNeXT uses convolutions. Transformer ViT replaces convolutions entirely with a pure Transformer architecture. If you’re familiar with the original Transformer, then you’re already most of the way toward understanding ViT. The main change ViT introduced was in how images are fed to a Transformer: An image is split into square non-overlapping patches, each of which gets turned into a vector or patch embedding. The patch embeddings are generated from a convolutional 2D layer which creates the proper input dimensions (which for a base Transformer is 768 values for each patch embedding). If you had a 224x224 pixel image, you could split it into 196 16x16 image patches. Just like how text is tokenized into words, an image is “tokenized” into a sequence of patches. A learnable embedding - a special [CLS] token - is added to the beginning of the patch embeddings just like BERT. The final hidden state of the [CLS] token is used as the input to the attached classification head; other outputs are ignored. This token helps the model learn how to encode a representation of the image. The last thing to add to the patch and learnable embeddings are the position embeddings because the model doesn’t know how the image patches are ordered. The position embeddings are also learnable and have the same size as the patch embeddings. Finally, all of the embeddings are passed to the Transformer encoder. The output, specifically only the output with the [CLS] token, is passed to a multilayer perceptron head (MLP). ViT’s pretraining objective is simply classification. 
Like other classification heads, the MLP head converts the output into logits over the class labels and calculates the cross-entropy loss to find the most likely class. Ready to try your hand at image classification? Check out our complete image classification guide to learn how to finetune ViT and use it for inference! CNN This section briefly explains convolutions, but it’d be helpful to have a prior understanding of how they change an image’s shape and size. If you’re unfamiliar with convolutions, check out the Convolution Neural Networks chapter from the fastai book! ConvNeXT is a CNN architecture that adopts new and modern network designs to improve performance. However, convolutions are still at the core of the model. From a high-level perspective, a convolution is an operation where a smaller matrix (kernel) is multiplied by a small window of the image pixels. It computes some features from it, such as a particular texture or curvature of a line. Then it slides over to the next window of pixels; the distance the convolution travels is known as the stride. A basic convolution without padding or stride, taken from A guide to convolution arithmetic for deep learning. You can feed this output to another convolutional layer, and with each successive layer, the network learns more complex and abstract things like hotdogs or rockets. Between convolutional layers, it is common to add a pooling layer to reduce dimensionality and make the model more robust to variations of a feature’s position. ConvNeXT modernizes a CNN in five ways: Change the number of blocks in each stage and “patchify” an image with a larger stride and corresponding kernel size. The non-overlapping sliding window makes this patchifying strategy similar to how ViT splits an image into patches. A bottleneck layer shrinks the number of channels and then restores it because it is faster to do a 1x1 convolution, and you can increase the depth. An inverted bottleneck does the opposite by expanding the number of channels and shrinking them, which is more memory efficient. Replace the typical 3x3 convolutional layer in the bottleneck layer with depthwise convolution, which applies a convolution to each input channel separately and then stacks them back together at the end. This widens the network width for improved performance. ViT has a global receptive field which means it can see more of an image at once thanks to its attention mechanism. ConvNeXT attempts to replicate this effect by increasing the kernel size to 7x7. ConvNeXT also makes several layer design changes that imitate Transformer models. There are fewer activation and normalization layers, the activation function is switched to GELU instead of ReLU, and it uses LayerNorm instead of BatchNorm. The output from the convolution blocks is passed to a classification head which converts the outputs into logits and calculates the cross-entropy loss to find the most likely label. Object detection DETR, DEtection TRansformer, is an end-to-end object detection model that combines a CNN with a Transformer encoder-decoder. A pretrained CNN backbone takes an image, represented by its pixel values, and creates a low-resolution feature map of it. A 1x1 convolution is applied to the feature map to reduce dimensionality and it creates a new feature map with a high-level image representation. Since the Transformer is a sequential model, the feature map is flattened into a sequence of feature vectors that are combined with positional embeddings. 
The feature vectors are passed to the encoder, which learns the image representations using its attention layers. Next, the encoder hidden states are combined with object queries in the decoder. Object queries are learned embeddings that focus on the different regions of an image, and they’re updated as they progress through each attention layer. The decoder hidden states are passed to a feedforward network that predicts the bounding box coordinates and class label for each object query, or no object if there isn’t one. DETR decodes each object query in parallel to output N final predictions, where N is the number of queries. Unlike a typical autoregressive model that predicts one element at a time, object detection is a set prediction task (bounding box, class label) that makes N predictions in a single pass. DETR uses a bipartite matching loss during training to compare a fixed number of predictions with a fixed set of ground truth labels. If there are fewer ground truth labels in the set of N labels, then they’re padded with a no object class. This loss function encourages DETR to find a one-to-one assignment between the predictions and ground truth labels. If either the bounding boxes or class labels aren’t correct, a loss is incurred. Likewise, if DETR predicts an object that doesn’t exist, it is penalized. This encourages DETR to find other objects in an image instead of focusing on one really prominent object. An object detection head is added on top of DETR to find the class label and the coordinates of the bounding box. There are two components to the object detection head: a linear layer to transform the decoder hidden states into logits over the class labels, and a MLP to predict the bounding box. Ready to try your hand at object detection? Check out our complete object detection guide to learn how to finetune DETR and use it for inference! Image segmentation Mask2Former is a universal architecture for solving all types of image segmentation tasks. Traditional segmentation models are typically tailored towards a particular subtask of image segmentation, like instance, semantic or panoptic segmentation. Mask2Former frames each of those tasks as a mask classification problem. Mask classification groups pixels into N segments, and predicts N masks and their corresponding class label for a given image. We’ll explain how Mask2Former works in this section, and then you can try finetuning SegFormer at the end. There are three main components to Mask2Former: A Swin backbone accepts an image and creates a low-resolution image feature map from 3 consecutive 3x3 convolutions. The feature map is passed to a pixel decoder which gradually upsamples the low-resolution features into high-resolution per-pixel embeddings. The pixel decoder actually generates multi-scale features (contains both low- and high-resolution features) with resolutions 1/32, 1/16, and 1/8th of the original image. Each of these feature maps of differing scales is fed successively to one Transformer decoder layer at a time in order to capture small objects from the high-resolution features. The key to Mask2Former is the masked attention mechanism in the decoder. Unlike cross-attention which can attend to the entire image, masked attention only focuses on a certain area of the image. This is faster and leads to better performance because the local features of an image are enough for the model to learn from. 
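To make the masked attention idea concrete, here is a minimal, self-contained sketch (not Mask2Former's actual implementation) of restricting which image positions a set of queries can attend to: masked-out positions receive a score of negative infinity before the softmax, so they end up with zero attention weight. In Mask2Former itself, the mask comes from the mask predicted by the previous decoder layer rather than being fixed by hand.

>>> import torch
>>> queries = torch.randn(100, 256)   # e.g. 100 object queries
>>> keys = torch.randn(64 * 64, 256)  # flattened image features
>>> keep = torch.zeros(100, 64 * 64, dtype=torch.bool)
>>> keep[:, : 32 * 64] = True         # toy mask: only the top half of the image is visible
>>> scores = queries @ keys.T / 256 ** 0.5
>>> scores = scores.masked_fill(~keep, float("-inf"))
>>> weights = scores.softmax(dim=-1)  # masked positions get exactly zero weight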
Like DETR, Mask2Former also uses learned object queries and combines them with the image features from the pixel decoder to make a set prediction (class label, mask prediction). The decoder hidden states are passed into a linear layer and transformed into logits over the class labels. The cross-entropy loss is calculated between the logits and class label to find the most likely one. The mask predictions are generated by combining the pixel-embeddings with the final decoder hidden states. The sigmoid cross-entropy and dice loss is calculated between the logits and the ground truth mask to find the most likely mask. Ready to try your hand at object detection? Check out our complete image segmentation guide to learn how to finetune SegFormer and use it for inference! Depth estimation GLPN, Global-Local Path Network, is a Transformer for depth estimation that combines a SegFormer encoder with a lightweight decoder. Like ViT, an image is split into a sequence of patches, except these image patches are smaller. This is better for dense prediction tasks like segmentation or depth estimation. The image patches are transformed into patch embeddings (see the image classification section for more details about how patch embeddings are created), which are fed to the encoder. The encoder accepts the patch embeddings, and passes them through several encoder blocks. Each block consists of attention and Mix-FFN layers. The purpose of the latter is to provide positional information. At the end of each encoder block is a patch merging layer for creating hierarchical representations. The features of each group of neighboring patches are concatenated, and a linear layer is applied to the concatenated features to reduce the number of patches to a resolution of 1/4. This becomes the input to the next encoder block, where this whole process is repeated until you have image features with resolutions of 1/8, 1/16, and 1/32. A lightweight decoder takes the last feature map (1/32 scale) from the encoder and upsamples it to 1/16 scale. From here, the feature is passed into a Selective Feature Fusion (SFF) module, which selects and combines local and global features from an attention map for each feature and then upsamples it to 1/8th. This process is repeated until the decoded features are the same size as the original image. The output is passed through two convolution layers and then a sigmoid activation is applied to predict the depth of each pixel. Natural language processing The Transformer was initially designed for machine translation, and since then, it has practically become the default architecture for solving all NLP tasks. Some tasks lend themselves to the Transformer’s encoder structure, while others are better suited for the decoder. Still, other tasks make use of both the Transformer’s encoder-decoder structure. Text classification BERT is an encoder-only model and is the first model to effectively implement deep bidirectionality to learn richer representations of the text by attending to words on both sides. BERT uses WordPiece tokenization to generate a token embedding of the text. To tell the difference between a single sentence and a pair of sentences, a special [SEP] token is added to differentiate them. A special [CLS] token is added to the beginning of every sequence of text. The final output with the [CLS] token is used as the input to the classification head for classification tasks. 
BERT also adds a segment embedding to denote whether a token belongs to the first or second sentence in a pair of sentences. BERT is pretrained with two objectives: masked language modeling and next-sentence prediction. In masked language modeling, some percentage of the input tokens are randomly masked, and the model needs to predict these. This solves the issue of bidirectionality, where the model could cheat and see all the words and “predict” the next word. The final hidden states of the predicted mask tokens are passed to a feedforward network with a softmax over the vocabulary to predict the masked word. The second pretraining object is next-sentence prediction. The model must predict whether sentence B follows sentence A. Half of the time sentence B is the next sentence, and the other half of the time, sentence B is a random sentence. The prediction, whether it is the next sentence or not, is passed to a feedforward network with a softmax over the two classes (IsNext and NotNext). The input embeddings are passed through multiple encoder layers to output some final hidden states. To use the pretrained model for text classification, add a sequence classification head on top of the base BERT model. The sequence classification head is a linear layer that accepts the final hidden states and performs a linear transformation to convert them into logits. The cross-entropy loss is calculated between the logits and target to find the most likely label. Ready to try your hand at text classification? Check out our complete text classification guide to learn how to finetune DistilBERT and use it for inference! Token classification To use BERT for token classification tasks like named entity recognition (NER), add a token classification head on top of the base BERT model. The token classification head is a linear layer that accepts the final hidden states and performs a linear transformation to convert them into logits. The cross-entropy loss is calculated between the logits and each token to find the most likely label. Ready to try your hand at token classification? Check out our complete token classification guide to learn how to finetune DistilBERT and use it for inference! Question answering To use BERT for question answering, add a span classification head on top of the base BERT model. This linear layer accepts the final hidden states and performs a linear transformation to compute the span start and end logits corresponding to the answer. The cross-entropy loss is calculated between the logits and the label position to find the most likely span of text corresponding to the answer. Ready to try your hand at question answering? Check out our complete question answering guide to learn how to finetune DistilBERT and use it for inference! 💡 Notice how easy it is to use BERT for different tasks once it’s been pretrained. You only need to add a specific head to the pretrained model to manipulate the hidden states into your desired output! Text generation GPT-2 is a decoder-only model pretrained on a large amount of text. It can generate convincing (though not always true!) text given a prompt and complete other NLP tasks like question answering despite not being explicitly trained to. GPT-2 uses byte pair encoding (BPE) to tokenize words and generate a token embedding. Positional encodings are added to the token embeddings to indicate the position of each token in the sequence. The input embeddings are passed through multiple decoder blocks to output some final hidden state. 
Within each decoder block, GPT-2 uses a masked self-attention layer which means GPT-2 can’t attend to future tokens. It is only allowed to attend to tokens on the left. This is different from BERT’s mask token because, in masked self-attention, an attention mask is used to set the score to 0 for future tokens. The output from the decoder is passed to a language modeling head, which performs a linear transformation to convert the hidden states into logits. The label is the next token in the sequence, which are created by shifting the logits to the right by one. The cross-entropy loss is calculated between the shifted logits and the labels to output the next most likely token. GPT-2’s pretraining objective is based entirely on causal language modeling, predicting the next word in a sequence. This makes GPT-2 especially good at tasks that involve generating text. Ready to try your hand at text generation? Check out our complete causal language modeling guide to learn how to finetune DistilGPT-2 and use it for inference! For more information about text generation, check out the text generation strategies guide! Summarization Encoder-decoder models like BART and T5 are designed for the sequence-to-sequence pattern of a summarization task. We’ll explain how BART works in this section, and then you can try finetuning T5 at the end. BART’s encoder architecture is very similar to BERT and accepts a token and positional embedding of the text. BART is pretrained by corrupting the input and then reconstructing it with the decoder. Unlike other encoders with specific corruption strategies, BART can apply any type of corruption. The text infilling corruption strategy works the best though. In text infilling, a number of text spans are replaced with a single mask token. This is important because the model has to predict the masked tokens, and it teaches the model to predict the number of missing tokens. The input embeddings and masked spans are passed through the encoder to output some final hidden states, but unlike BERT, BART doesn’t add a final feedforward network at the end to predict a word. The encoder’s output is passed to the decoder, which must predict the masked tokens and any uncorrupted tokens from the encoder’s output. This gives additional context to help the decoder restore the original text. The output from the decoder is passed to a language modeling head, which performs a linear transformation to convert the hidden states into logits. The cross-entropy loss is calculated between the logits and the label, which is just the token shifted to the right. Ready to try your hand at summarization? Check out our complete summarization guide to learn how to finetune T5 and use it for inference! For more information about text generation, check out the text generation strategies guide! Translation Translation is another example of a sequence-to-sequence task, which means you can use an encoder-decoder model like BART or T5 to do it. We’ll explain how BART works in this section, and then you can try finetuning T5 at the end. BART adapts to translation by adding a separate randomly initialized encoder to map a source language to an input that can be decoded into the target language. This new encoder’s embeddings are passed to the pretrained encoder instead of the original word embeddings. The source encoder is trained by updating the source encoder, positional embeddings, and input embeddings with the cross-entropy loss from the model output. 
The model parameters are frozen in this first step, and all the model parameters are trained together in the second step. BART has since been followed up by a multilingual version, mBART, intended for translation and pretrained on many different languages. Ready to try your hand at translation? Check out our complete translation guide to learn how to finetune T5 and use it for inference! For more information about text generation, check out the text generation strategies guide!
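As a closing illustration of the "add a head on top of a pretrained model" pattern used throughout the sections above, the snippet below loads the same checkpoint behind two different task heads. The checkpoint and label counts are arbitrary examples, and the newly added head weights are randomly initialized, so they still need finetuning (the warning transformers prints about newly initialized weights is expected).

>>> from transformers import AutoModelForSequenceClassification, AutoModelForTokenClassification
>>> # Same pretrained encoder, two different task-specific heads
>>> text_classifier = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)
>>> token_classifier = AutoModelForTokenClassification.from_pretrained("distilbert-base-uncased", num_labels=9)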
https://huggingface.co/docs/transformers/model_doc/clip
CLIP Overview The CLIP model was proposed in Learning Transferable Visual Models From Natural Language Supervision by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, Ilya Sutskever. CLIP (Contrastive Language-Image Pre-Training) is a neural network trained on a variety of (image, text) pairs. It can be instructed in natural language to predict the most relevant text snippet, given an image, without directly optimizing for the task, similarly to the zero-shot capabilities of GPT-2 and 3. The abstract from the paper is the following: State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at this https URL. Usage CLIP is a multi-modal vision and language model. It can be used for image-text similarity and for zero-shot image classification. CLIP uses a ViT like transformer to get visual features and a causal language model to get the text features. Both the text and visual features are then projected to a latent space with identical dimension. The dot product between the projected image and text features is then used as a similar score. To feed images to the Transformer encoder, each image is split into a sequence of fixed-size non-overlapping patches, which are then linearly embedded. A [CLS] token is added to serve as representation of an entire image. The authors also add absolute position embeddings, and feed the resulting sequence of vectors to a standard Transformer encoder. The CLIPImageProcessor can be used to resize (or rescale) and normalize images for the model. The CLIPTokenizer is used to encode the text. The CLIPProcessor wraps CLIPImageProcessor and CLIPTokenizer into a single instance to both encode the text and prepare the images. The following example shows how to get the image-text similarity scores using CLIPProcessor and CLIPModel. 
>>> from PIL import Image >>> import requests >>> from transformers import CLIPProcessor, CLIPModel >>> model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32") >>> processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor(text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True) >>> outputs = model(**inputs) >>> logits_per_image = outputs.logits_per_image >>> probs = logits_per_image.softmax(dim=1) This model was contributed by valhalla. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with CLIP. A blog post on How to fine-tune CLIP on 10,000 image-text pairs. CLIP is supported by this example script. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it. The resource should ideally demonstrate something new instead of duplicating an existing resource. CLIPConfig class transformers.CLIPConfig < source > ( text_config = None vision_config = None projection_dim = 512 logit_scale_init_value = 2.6592 **kwargs ) Parameters text_config (dict, optional) — Dictionary of configuration options used to initialize CLIPTextConfig. vision_config (dict, optional) — Dictionary of configuration options used to initialize CLIPVisionConfig. projection_dim (int, optional, defaults to 512) — Dimensionality of text and vision projection layers. logit_scale_init_value (float, optional, defaults to 2.6592) — The initial value of the logit_scale parameter. Default is used as per the original CLIP implementation. kwargs (optional) — Dictionary of keyword arguments. CLIPConfig is the configuration class to store the configuration of a CLIPModel. It is used to instantiate a CLIP model according to the specified arguments, defining the text model and vision model configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the CLIP openai/clip-vit-base-patch32 architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import CLIPConfig, CLIPModel >>> # Initializing a CLIPConfig with openai/clip-vit-base-patch32 style configuration >>> configuration = CLIPConfig() >>> # Initializing a CLIPModel (with random weights) from the openai/clip-vit-base-patch32 style configuration >>> model = CLIPModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config >>> # We can also initialize a CLIPConfig from a CLIPTextConfig and a CLIPVisionConfig >>> from transformers import CLIPTextConfig, CLIPVisionConfig >>> # Initializing a CLIPText and CLIPVision configuration >>> config_text = CLIPTextConfig() >>> config_vision = CLIPVisionConfig() >>> config = CLIPConfig.from_text_vision_configs(config_text, config_vision) from_text_vision_configs < source > ( text_config: CLIPTextConfig vision_config: CLIPVisionConfig **kwargs ) → CLIPConfig An instance of a configuration object Instantiate a CLIPConfig (or a derived class) from CLIP text model configuration and CLIP vision model configuration. CLIPTextConfig class transformers.CLIPTextConfig < source > ( vocab_size = 49408 hidden_size = 512 intermediate_size = 2048 projection_dim = 512 num_hidden_layers = 12 num_attention_heads = 8 max_position_embeddings = 77 hidden_act = 'quick_gelu' layer_norm_eps = 1e-05 attention_dropout = 0.0 initializer_range = 0.02 initializer_factor = 1.0 pad_token_id = 1 bos_token_id = 49406 eos_token_id = 49407 **kwargs ) Parameters vocab_size (int, optional, defaults to 49408) — Vocabulary size of the CLIP text model. 
Defines the number of different tokens that can be represented by the input_ids passed when calling CLIPModel. hidden_size (int, optional, defaults to 512) — Dimensionality of the encoder layers and the pooler layer. intermediate_size (int, optional, defaults to 2048) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder. num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 8) — Number of attention heads for each attention layer in the Transformer encoder. max_position_embeddings (int, optional, defaults to 77) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). hidden_act (str or function, optional, defaults to "quick_gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu", "gelu_new" and "quick_gelu" are supported. layer_norm_eps (float, optional, defaults to 1e-5) — The epsilon used by the layer normalization layers. attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. initializer_factor (float, optional, defaults to 1) — A factor for initializing all weight matrices (should be kept to 1, used internally for initialization testing). This is the configuration class to store the configuration of a CLIPTextModel. It is used to instantiate a CLIP text encoder according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the text encoder of the CLIP openai/clip-vit-base-patch32 architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import CLIPTextConfig, CLIPTextModel >>> # Initializing a CLIPTextConfig with openai/clip-vit-base-patch32 style configuration >>> configuration = CLIPTextConfig() >>> # Initializing a CLIPTextModel (with random weights) from the openai/clip-vit-base-patch32 style configuration >>> model = CLIPTextModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config CLIPVisionConfig class transformers.CLIPVisionConfig < source > ( hidden_size = 768 intermediate_size = 3072 projection_dim = 512 num_hidden_layers = 12 num_attention_heads = 12 num_channels = 3 image_size = 224 patch_size = 32 hidden_act = 'quick_gelu' layer_norm_eps = 1e-05 attention_dropout = 0.0 initializer_range = 0.02 initializer_factor = 1.0 **kwargs ) Parameters hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder. num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. image_size (int, optional, defaults to 224) — The size (resolution) of each image. patch_size (int, optional, defaults to 32) — The size (resolution) of each patch. hidden_act (str or function, optional, defaults to "quick_gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu", "gelu_new" and "quick_gelu" are supported. 
layer_norm_eps (float, optional, defaults to 1e-5) — The epsilon used by the layer normalization layers. attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. initializer_factor (float, optional, defaults to 1) — A factor for initializing all weight matrices (should be kept to 1, used internally for initialization testing). This is the configuration class to store the configuration of a CLIPVisionModel. It is used to instantiate a CLIP vision encoder according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the vision encoder of the CLIP openai/clip-vit-base-patch32 architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import CLIPVisionConfig, CLIPVisionModel >>> # Initializing a CLIPVisionConfig with openai/clip-vit-base-patch32 style configuration >>> configuration = CLIPVisionConfig() >>> # Initializing a CLIPVisionModel (with random weights) from the openai/clip-vit-base-patch32 style configuration >>> model = CLIPVisionModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config CLIPTokenizer class transformers.CLIPTokenizer < source > ( vocab_file merges_file errors = 'replace' unk_token = '<|endoftext|>' bos_token = '<|startoftext|>' eos_token = '<|endoftext|>' pad_token = '<|endoftext|>' **kwargs ) Parameters vocab_file (str) — Path to the vocabulary file. merges_file (str) — Path to the merges file. errors (str, optional, defaults to "replace") — Paradigm to follow when decoding bytes to UTF-8. See bytes.decode for more information. unk_token (str, optional, defaults to <|endoftext|>) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. bos_token (str, optional, defaults to <|startoftext|>) — The beginning of sequence token. eos_token (str, optional, defaults to <|endoftext|>) — The end of sequence token. Construct a CLIP tokenizer. Based on byte-level Byte-Pair-Encoding. This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. build_inputs_with_special_tokens < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs to which the special tokens will be added. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of input IDs with the appropriate special tokens. Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. A CLIP sequence has the following format: single sequence: <|startoftext|> X <|endoftext|> Pairs of sequences are not the expected use case, but they will be handled without a separator. get_special_tokens_mask < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None already_has_special_tokens: bool = False ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. already_has_special_tokens (bool, optional, defaults to False) — Whether or not the token list is already formatted with special tokens for the model. 
A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model method. create_token_type_ids_from_sequences < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of zeros. Create a mask from the two sequences passed. CLIP does not make use of token type ids, therefore a list of zeros is returned. save_vocabulary < source > ( save_directory: str filename_prefix: typing.Optional[str] = None ) CLIPTokenizerFast class transformers.CLIPTokenizerFast < source > ( vocab_file = None merges_file = None tokenizer_file = None unk_token = '<|endoftext|>' bos_token = '<|startoftext|>' eos_token = '<|endoftext|>' pad_token = '<|endoftext|>' **kwargs ) Parameters vocab_file (str) — Path to the vocabulary file. merges_file (str) — Path to the merges file. unk_token (str, optional, defaults to <|endoftext|>) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. bos_token (str, optional, defaults to <|startoftext|>) — The beginning of sequence token. eos_token (str, optional, defaults to <|endoftext|>) — The end of sequence token. Construct a “fast” CLIP tokenizer (backed by HuggingFace’s tokenizers library). Based on byte-level Byte-Pair-Encoding. This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. build_inputs_with_special_tokens < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs to which the special tokens will be added. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of input IDs with the appropriate special tokens. Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A CLIP sequence has the following format: single sequence: <|startoftext|> X <|endoftext|> Pairs of sequences are not the expected use case, but they will be handled without a separator. create_token_type_ids_from_sequences < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of zeros. Create a mask from the two sequences passed. CLIP does not make use of token type ids, therefore a list of zeros is returned. 
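To make the <|startoftext|> X <|endoftext|> format described above concrete, here is a minimal, illustrative sketch (assuming the openai/clip-vit-base-patch32 checkpoint used elsewhere on this page) that encodes a short caption and inspects the resulting tokens:

>>> from transformers import CLIPTokenizerFast

>>> tokenizer = CLIPTokenizerFast.from_pretrained("openai/clip-vit-base-patch32")
>>> encoding = tokenizer("a photo of a cat")
>>> # The encoded sequence is wrapped in the <|startoftext|> and <|endoftext|> special tokens
>>> tokens = tokenizer.convert_ids_to_tokens(encoding["input_ids"])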
CLIPImageProcessor class transformers.CLIPImageProcessor < source > ( do_resize: bool = True size: typing.Dict[str, int] = None resample: Resampling = <Resampling.BICUBIC: 3> do_center_crop: bool = True crop_size: typing.Dict[str, int] = None do_rescale: bool = True rescale_factor: typing.Union[int, float] = 0.00392156862745098 do_normalize: bool = True image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None do_convert_rgb: bool = True **kwargs ) Parameters do_resize (bool, optional, defaults to True) — Whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by do_resize in the preprocess method. size (Dict[str, int], optional, defaults to {"shortest_edge": 224}) — Size of the image after resizing. The shortest edge of the image is resized to size["shortest_edge"], with the longest edge resized to keep the input aspect ratio. Can be overridden by size in the preprocess method. resample (PILImageResampling, optional, defaults to PILImageResampling.BICUBIC) — Resampling filter to use if resizing the image. Can be overridden by resample in the preprocess method. do_center_crop (bool, optional, defaults to True) — Whether to center crop the image to the specified crop_size. Can be overridden by do_center_crop in the preprocess method. crop_size (Dict[str, int], optional, defaults to 224) — Size of the output image after applying center_crop. Can be overridden by crop_size in the preprocess method. do_rescale (bool, optional, defaults to True) — Whether to rescale the image by the specified scale rescale_factor. Can be overridden by do_rescale in the preprocess method. rescale_factor (int or float, optional, defaults to 1/255) — Scale factor to use if rescaling the image. Can be overridden by rescale_factor in the preprocess method. do_normalize (bool, optional, defaults to True) — Whether to normalize the image. Can be overridden by do_normalize in the preprocess method. image_mean (float or List[float], optional, defaults to [0.48145466, 0.4578275, 0.40821073]) — Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the image_mean parameter in the preprocess method. image_std (float or List[float], optional, defaults to [0.26862954, 0.26130258, 0.27577711]) — Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the image_std parameter in the preprocess method. do_convert_rgb (bool, optional, defaults to True) — Whether to convert the image to RGB. Constructs a CLIP image processor. 
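As a minimal usage sketch (reusing the COCO image URL and checkpoint from the example above), the image processor can be called directly on a PIL image; this call dispatches to the preprocess method documented below:

>>> from PIL import Image
>>> import requests
>>> from transformers import CLIPImageProcessor

>>> image_processor = CLIPImageProcessor.from_pretrained("openai/clip-vit-base-patch32")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> # Resizes, center crops, rescales and normalizes the image, returning PyTorch tensors
>>> inputs = image_processor(images=image, return_tensors="pt")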
preprocess < source > ( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]] do_resize: bool = None size: typing.Dict[str, int] = None resample: Resampling = None do_center_crop: bool = None crop_size: int = None do_rescale: bool = None rescale_factor: float = None do_normalize: bool = None image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None do_convert_rgb: bool = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None data_format: typing.Optional[transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'> input_data_format: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None **kwargs ) Parameters images (ImageInput) — Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set do_rescale=False. do_resize (bool, optional, defaults to self.do_resize) — Whether to resize the image. size (Dict[str, int], optional, defaults to self.size) — Size of the image after resizing. Shortest edge of the image is resized to size[“shortest_edge”], with the longest edge resized to keep the input aspect ratio. resample (int, optional, defaults to self.resample) — Resampling filter to use if resizing the image. This can be one of the enum PILImageResampling. Only has an effect if do_resize is set to True. do_center_crop (bool, optional, defaults to self.do_center_crop) — Whether to center crop the image. crop_size (Dict[str, int], optional, defaults to self.crop_size) — Size of the center crop. Only has an effect if do_center_crop is set to True. do_rescale (bool, optional, defaults to self.do_rescale) — Whether to rescale the image. rescale_factor (float, optional, defaults to self.rescale_factor) — Rescale factor to rescale the image by if do_rescale is set to True. do_normalize (bool, optional, defaults to self.do_normalize) — Whether to normalize the image. image_mean (float or List[float], optional, defaults to self.image_mean) — Image mean to use for normalization. Only has an effect if do_normalize is set to True. image_std (float or List[float], optional, defaults to self.image_std) — Image standard deviation to use for normalization. Only has an effect if do_normalize is set to True. do_convert_rgb (bool, optional, defaults to self.do_convert_rgb) — Whether to convert the image to RGB. return_tensors (str or TensorType, optional) — The type of tensors to return. Can be one of: Unset: Return a list of np.ndarray. TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor. TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor. TensorType.NUMPY or 'np': Return a batch of type np.ndarray. TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray. data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) — The channel dimension format for the output image. Can be one of: "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format. "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format. Unset: Use the channel dimension format of the input image. input_data_format (ChannelDimension or str, optional) — The channel dimension format for the input image. 
If unset, the channel dimension format is inferred from the input image. Can be one of: "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format. "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format. "none" or ChannelDimension.NONE: image in (height, width) format. Preprocess an image or batch of images. CLIPFeatureExtractor CLIPProcessor class transformers.CLIPProcessor < source > ( image_processor = None tokenizer = None **kwargs ) Parameters image_processor (CLIPImageProcessor) — The image processor is a required input. tokenizer (CLIPTokenizerFast) — The tokenizer is a required input. Constructs a CLIP processor which wraps a CLIP image processor and a CLIP tokenizer into a single processor. CLIPProcessor offers all the functionalities of CLIPImageProcessor and CLIPTokenizerFast. See the __call__() and decode() for more information. This method forwards all its arguments to CLIPTokenizerFast’s batch_decode(). Please refer to the docstring of this method for more information. This method forwards all its arguments to CLIPTokenizerFast’s decode(). Please refer to the docstring of this method for more information. CLIPModel class transformers.CLIPModel < source > ( config: CLIPConfig ) Parameters config (CLIPConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None pixel_values: typing.Optional[torch.FloatTensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.LongTensor] = None return_loss: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.clip.modeling_clip.CLIPOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. 
See CLIPImageProcessor.call() for details. return_loss (bool, optional) — Whether or not to return the contrastive loss. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.models.clip.modeling_clip.CLIPOutput or tuple(torch.FloatTensor) A transformers.models.clip.modeling_clip.CLIPOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.clip.configuration_clip.CLIPConfig'>) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when return_loss is True) — Contrastive loss for image-text similarity. logits_per_image:(torch.FloatTensor of shape (image_batch_size, text_batch_size)) — The scaled dot product scores between image_embeds and text_embeds. This represents the image-text similarity scores. logits_per_text:(torch.FloatTensor of shape (text_batch_size, image_batch_size)) — The scaled dot product scores between text_embeds and image_embeds. This represents the text-image similarity scores. text_embeds(torch.FloatTensor of shape (batch_size, output_dim) — The text embeddings obtained by applying the projection layer to the pooled output of CLIPTextModel. image_embeds(torch.FloatTensor of shape (batch_size, output_dim) — The image embeddings obtained by applying the projection layer to the pooled output of CLIPVisionModel. text_model_output(BaseModelOutputWithPooling): The output of the CLIPTextModel. vision_model_output(BaseModelOutputWithPooling): The output of the CLIPVisionModel. The CLIPModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, CLIPModel >>> model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32") >>> processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor( ... text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True ... ) >>> outputs = model(**inputs) >>> logits_per_image = outputs.logits_per_image >>> probs = logits_per_image.softmax(dim=1) get_text_features < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → text_features (torch.FloatTensor of shape (batch_size, output_dim) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. 
See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns text_features (torch.FloatTensor of shape (batch_size, output_dim) The text embeddings obtained by applying the projection layer to the pooled output of CLIPTextModel. The CLIPModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import AutoTokenizer, CLIPModel >>> model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32") >>> tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32") >>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt") >>> text_features = model.get_text_features(**inputs) get_image_features < source > ( pixel_values: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → image_features (torch.FloatTensor of shape (batch_size, output_dim) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See CLIPImageProcessor.call() for details. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns image_features (torch.FloatTensor of shape (batch_size, output_dim) The image embeddings obtained by applying the projection layer to the pooled output of CLIPVisionModel. The CLIPModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Examples: >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, CLIPModel >>> model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32") >>> processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor(images=image, return_tensors="pt") >>> image_features = model.get_image_features(**inputs) CLIPTextModel class transformers.CLIPTextModel < source > ( config: CLIPTextConfig ) Parameters config (CLIPConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The text model from CLIP without any head or projection on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.clip.configuration_clip.CLIPTextConfig'>) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. 
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The CLIPTextModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import AutoTokenizer, CLIPTextModel >>> model = CLIPTextModel.from_pretrained("openai/clip-vit-base-patch32") >>> tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32") >>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_state = outputs.last_hidden_state >>> pooled_output = outputs.pooler_output CLIPTextModelWithProjection class transformers.CLIPTextModelWithProjection < source > ( config: CLIPTextConfig ) Parameters config (CLIPConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. CLIP Text Model with a projection layer on top (a linear layer on top of the pooled output). This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.clip.modeling_clip.CLIPTextModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. 
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.models.clip.modeling_clip.CLIPTextModelOutput or tuple(torch.FloatTensor) A transformers.models.clip.modeling_clip.CLIPTextModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.clip.configuration_clip.CLIPTextConfig'>) and inputs. text_embeds (torch.FloatTensor of shape (batch_size, output_dim) optional returned when model is initialized with with_projection=True) — The text embeddings obtained by applying the projection layer to the pooler_output. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The CLIPTextModelWithProjection forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Examples: >>> from transformers import AutoTokenizer, CLIPTextModelWithProjection >>> model = CLIPTextModelWithProjection.from_pretrained("openai/clip-vit-base-patch32") >>> tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32") >>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt") >>> outputs = model(**inputs) >>> text_embeds = outputs.text_embeds CLIPVisionModelWithProjection class transformers.CLIPVisionModelWithProjection < source > ( config: CLIPVisionConfig ) Parameters config (CLIPConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. CLIP Vision Model with a projection layer on top (a linear layer on top of the pooled output). This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.clip.modeling_clip.CLIPVisionModelOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See CLIPImageProcessor.call() for details. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.models.clip.modeling_clip.CLIPVisionModelOutput or tuple(torch.FloatTensor) A transformers.models.clip.modeling_clip.CLIPVisionModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.clip.configuration_clip.CLIPVisionConfig'>) and inputs. image_embeds (torch.FloatTensor of shape (batch_size, output_dim) optional returned when model is initialized with with_projection=True) — The image embeddings obtained by applying the projection layer to the pooler_output. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. 
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The CLIPVisionModelWithProjection forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, CLIPVisionModelWithProjection >>> model = CLIPVisionModelWithProjection.from_pretrained("openai/clip-vit-base-patch32") >>> processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor(images=image, return_tensors="pt") >>> outputs = model(**inputs) >>> image_embeds = outputs.image_embeds CLIPVisionModel class transformers.CLIPVisionModel < source > ( config: CLIPVisionConfig ) Parameters config (CLIPConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The vision model from CLIP without any head or projection on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See CLIPImageProcessor.call() for details. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.clip.configuration_clip.CLIPVisionConfig'>) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. 
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The CLIPVisionModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, CLIPVisionModel >>> model = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32") >>> processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor(images=image, return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_state = outputs.last_hidden_state >>> pooled_output = outputs.pooler_output TFCLIPModel class transformers.TFCLIPModel < source > ( *args **kwargs ) Parameters config (CLIPConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! 
If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None pixel_values: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None return_loss: Optional[bool] = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: bool = False ) → transformers.models.clip.modeling_tf_clip.TFCLIPOutput or tuple(tf.Tensor) Parameters input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See CLIPImageProcessor.call() for details. attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? return_loss (bool, optional) — Whether or not to return the contrastive loss. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). 
Returns transformers.models.clip.modeling_tf_clip.TFCLIPOutput or tuple(tf.Tensor) A transformers.models.clip.modeling_tf_clip.TFCLIPOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.clip.configuration_clip.CLIPConfig'>) and inputs. loss (tf.Tensor of shape (1,), optional, returned when return_loss is True) — Contrastive loss for image-text similarity. logits_per_image:(tf.Tensor of shape (image_batch_size, text_batch_size)) — The scaled dot product scores between image_embeds and text_embeds. This represents the image-text similarity scores. logits_per_text:(tf.Tensor of shape (text_batch_size, image_batch_size)) — The scaled dot product scores between text_embeds and image_embeds. This represents the text-image similarity scores. text_embeds(tf.Tensor of shape (batch_size, output_dim) — The text embeddings obtained by applying the projection layer to the pooled output of TFCLIPTextModel. image_embeds(tf.Tensor of shape (batch_size, output_dim) — The image embeddings obtained by applying the projection layer to the pooled output of TFCLIPVisionModel. text_model_output(~modeling_tf_utils.TFBaseModelOutputWithPooling): The output of the TFCLIPTextModel. vision_model_output(~modeling_tf_utils.TFBaseModelOutputWithPooling): The output of the TFCLIPVisionModel. The TFCLIPModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> import tensorflow as tf >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, TFCLIPModel >>> model = TFCLIPModel.from_pretrained("openai/clip-vit-base-patch32") >>> processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor( ... text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="tf", padding=True ... ) >>> outputs = model(**inputs) >>> logits_per_image = outputs.logits_per_image >>> probs = tf.nn.softmax(logits_per_image, axis=1) get_text_features < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: bool = False ) → text_features (tf.Tensor of shape (batch_size, output_dim) Parameters input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? 
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to `False“) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). Returns text_features (tf.Tensor of shape (batch_size, output_dim) The text embeddings obtained by applying the projection layer to the pooled output of TFCLIPTextModel. The TFCLIPModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import AutoTokenizer, TFCLIPModel >>> model = TFCLIPModel.from_pretrained("openai/clip-vit-base-patch32") >>> tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32") >>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="tf") >>> text_features = model.get_text_features(**inputs) get_image_features < source > ( pixel_values: TFModelInputType | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: bool = False ) → image_features (tf.Tensor of shape (batch_size, output_dim) Parameters pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor] `Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See CLIPImageProcessor.call() for details. output_attentions (bool, optional): Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to `False“) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). 
Returns image_features (tf.Tensor of shape (batch_size, output_dim) The image embeddings obtained by applying the projection layer to the pooled output of TFCLIPVisionModel. The TFCLIPModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, TFCLIPModel >>> model = TFCLIPModel.from_pretrained("openai/clip-vit-base-patch32") >>> processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor(images=image, return_tensors="tf") >>> image_features = model.get_image_features(**inputs) TFCLIPTextModel class transformers.TFCLIPTextModel < source > ( *args **kwargs ) call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor) Parameters input_ids (np.ndarray, tf.Tensor, List[tf.Tensor] `Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to `False“) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). 
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.clip.configuration_clip.CLIPTextConfig'>) and inputs. last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. This output is usually not a good summary of the semantic content of the input, you’re often better with averaging or pooling the sequence of hidden-states for the whole input sequence. hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFCLIPTextModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import AutoTokenizer, TFCLIPTextModel >>> model = TFCLIPTextModel.from_pretrained("openai/clip-vit-base-patch32") >>> tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32") >>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="tf") >>> outputs = model(**inputs) >>> last_hidden_state = outputs.last_hidden_state >>> pooled_output = outputs.pooler_output TFCLIPVisionModel class transformers.TFCLIPVisionModel < source > ( *args **kwargs ) call < source > ( pixel_values: TFModelInputType | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor) Parameters pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor] `Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See CLIPImageProcessor.call() for details. output_attentions (bool, optional): Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. 
See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to `False“) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). A transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.clip.configuration_clip.CLIPVisionConfig'>) and inputs. last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. This output is usually not a good summary of the semantic content of the input, you’re often better with averaging or pooling the sequence of hidden-states for the whole input sequence. hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFCLIPVisionModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, TFCLIPVisionModel >>> model = TFCLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32") >>> processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor(images=image, return_tensors="tf") >>> outputs = model(**inputs) >>> last_hidden_state = outputs.last_hidden_state >>> pooled_output = outputs.pooler_output FlaxCLIPModel class transformers.FlaxCLIPModel < source > ( config: CLIPConfig input_shape: typing.Optional[typing.Tuple] = None seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs ) Parameters config (CLIPConfig) — Model configuration class with all the parameters of the model. 
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype. Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters. If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16(). This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading, saving and converting weights from PyTorch models) This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to general usage and behavior. Finally, this model supports inherent JAX features such as: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization __call__ < source > ( input_ids pixel_values attention_mask = None position_ids = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.clip.modeling_flax_clip.FlaxCLIPOutput or tuple(torch.FloatTensor) Parameters input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? pixel_values (numpy.ndarray of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See CLIPImageProcessor.call() for details. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. 
Returns transformers.models.clip.modeling_flax_clip.FlaxCLIPOutput or tuple(torch.FloatTensor) A transformers.models.clip.modeling_flax_clip.FlaxCLIPOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.clip.configuration_clip.CLIPConfig'>) and inputs. logits_per_image:(jnp.ndarray of shape (image_batch_size, text_batch_size)) — The scaled dot product scores between image_embeds and text_embeds. This represents the image-text similarity scores. logits_per_text:(jnp.ndarray of shape (text_batch_size, image_batch_size)) — The scaled dot product scores between text_embeds and image_embeds. This represents the text-image similarity scores. text_embeds(jnp.ndarray of shape (batch_size, output_dim) — The text embeddings obtained by applying the projection layer to the pooled output of FlaxCLIPTextModel. image_embeds(jnp.ndarray of shape (batch_size, output_dim) — The image embeddings obtained by applying the projection layer to the pooled output of FlaxCLIPVisionModel. text_model_output(FlaxBaseModelOutputWithPooling): The output of the FlaxCLIPTextModel. vision_model_output(FlaxBaseModelOutputWithPooling): The output of the FlaxCLIPVisionModel. The FlaxCLIPPreTrainedModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> import jax >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, FlaxCLIPModel >>> model = FlaxCLIPModel.from_pretrained("openai/clip-vit-base-patch32") >>> processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor( ... text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="np", padding=True ... ) >>> outputs = model(**inputs) >>> logits_per_image = outputs.logits_per_image >>> probs = jax.nn.softmax(logits_per_image, axis=1) get_text_features < source > ( input_ids attention_mask = None position_ids = None params: dict = None dropout_rng: PRNGKey = None train = False ) → text_features (jnp.ndarray of shape (batch_size, output_dim) Examples: >>> from transformers import AutoTokenizer, FlaxCLIPModel >>> model = FlaxCLIPModel.from_pretrained("openai/clip-vit-base-patch32") >>> tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32") >>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="np") >>> text_features = model.get_text_features(**inputs) get_image_features < source > ( pixel_values params: dict = None dropout_rng: PRNGKey = None train = False ) → image_features (jnp.ndarray of shape (batch_size, output_dim) Parameters pixel_values (numpy.ndarray of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See CLIPImageProcessor.call() for details. 
Returns image_features (jnp.ndarray of shape (batch_size, output_dim) The image embeddings obtained by applying the projection layer to the pooled output of FlaxCLIPVisionModel Examples: >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, FlaxCLIPModel >>> model = FlaxCLIPModel.from_pretrained("openai/clip-vit-base-patch32") >>> processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor(images=image, return_tensors="np") >>> image_features = model.get_image_features(**inputs) FlaxCLIPTextModel class transformers.FlaxCLIPTextModel < source > ( config: CLIPTextConfig input_shape = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs ) __call__ < source > ( input_ids attention_mask = None position_ids = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or tuple(torch.FloatTensor) Parameters input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.clip.configuration_clip.CLIPTextConfig'>) and inputs. last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (jnp.ndarray of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. 
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaxCLIPTextPreTrainedModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FlaxCLIPTextModel >>> model = FlaxCLIPTextModel.from_pretrained("openai/clip-vit-base-patch32") >>> tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32") >>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="np") >>> outputs = model(**inputs) >>> last_hidden_state = outputs.last_hidden_state >>> pooler_output = outputs.pooler_output FlaxCLIPTextModelWithProjection class transformers.FlaxCLIPTextModelWithProjection < source > ( config: CLIPTextConfig input_shape = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs ) __call__ < source > ( input_ids attention_mask = None position_ids = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.clip.modeling_flax_clip.FlaxCLIPTextModelOutput or tuple(torch.FloatTensor) Parameters input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. 
Returns transformers.models.clip.modeling_flax_clip.FlaxCLIPTextModelOutput or tuple(torch.FloatTensor) A transformers.models.clip.modeling_flax_clip.FlaxCLIPTextModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.clip.configuration_clip.CLIPTextConfig'>) and inputs. text_embeds (jnp.ndarray of shape (batch_size, output_dim) — The text embeddings obtained by applying the projection layer to the pooled output of FlaxCLIPTextModel. last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaxCLIPTextPreTrainedModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FlaxCLIPTextModelWithProjection >>> model = FlaxCLIPTextModelWithProjection.from_pretrained("openai/clip-vit-base-patch32") >>> tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32") >>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="np") >>> outputs = model(**inputs) >>> text_embeds = outputs.text_embeds FlaxCLIPVisionModel class transformers.FlaxCLIPVisionModel < source > ( config: CLIPVisionConfig input_shape: typing.Optional[typing.Tuple] = None seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs ) __call__ < source > ( pixel_values params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or tuple(torch.FloatTensor) Parameters pixel_values (numpy.ndarray of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See CLIPImageProcessor.call() for details. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. 
A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.clip.configuration_clip.CLIPVisionConfig'>) and inputs. last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (jnp.ndarray of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaxCLIPVisionPreTrainedModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, FlaxCLIPVisionModel >>> model = FlaxCLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32") >>> processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor(images=image, return_tensors="np") >>> outputs = model(**inputs) >>> last_hidden_state = outputs.last_hidden_state >>> pooler_output = outputs.pooler_output
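As a complement to the get_text_features() and get_image_features() methods documented above, the following is a minimal, hedged sketch (not taken from the library's documented examples) of how the separately extracted embeddings relate to the similarity scores returned by FlaxCLIPModel: L2-normalizing both embeddings and taking their dot product gives cosine similarities, which the full model is understood to additionally multiply by a learned logit scale to produce logits_per_image.

>>> import jax.numpy as jnp
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoProcessor, AutoTokenizer, FlaxCLIPModel

>>> model = FlaxCLIPModel.from_pretrained("openai/clip-vit-base-patch32")
>>> processor = AutoProcessor.from_pretrained("openai/clip-vit-base-patch32")
>>> tokenizer = AutoTokenizer.from_pretrained("openai/clip-vit-base-patch32")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> # extract the projected text and image embeddings separately
>>> text_inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="np")
>>> text_embeds = model.get_text_features(**text_inputs)
>>> image_inputs = processor(images=image, return_tensors="np")
>>> image_embeds = model.get_image_features(**image_inputs)

>>> # L2-normalize and compare: this yields cosine similarities; logits_per_image
>>> # is assumed to be these values scaled by the model's learned temperature
>>> text_embeds = text_embeds / jnp.linalg.norm(text_embeds, axis=-1, keepdims=True)
>>> image_embeds = image_embeds / jnp.linalg.norm(image_embeds, axis=-1, keepdims=True)
>>> cosine_similarities = image_embeds @ text_embeds.T  # shape (num_images, num_texts)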
https://huggingface.co/docs/transformers/model_doc/canine
CANINE Overview The CANINE model was proposed in CANINE: Pre-training an Efficient Tokenization-Free Encoder for Language Representation by Jonathan H. Clark, Dan Garrette, Iulia Turc, John Wieting. It’s among the first papers that train a Transformer without using an explicit tokenization step (such as Byte Pair Encoding (BPE), WordPiece or SentencePiece). Instead, the model is trained directly at the Unicode character level. Training at the character level inevitably comes with a longer sequence length, which CANINE solves with an efficient downsampling strategy before applying a deep Transformer encoder. The abstract from the paper is the following: Pipelined NLP systems have largely been superseded by end-to-end neural modeling, yet nearly all commonly-used models still require an explicit tokenization step. While recent tokenization approaches based on data-derived subword lexicons are less brittle than manually engineered tokenizers, these techniques are not equally suited to all languages, and the use of any fixed vocabulary may limit a model’s ability to adapt. In this paper, we present CANINE, a neural encoder that operates directly on character sequences, without explicit tokenization or vocabulary, and a pre-training strategy that operates either directly on characters or optionally uses subwords as a soft inductive bias. To use its finer-grained input effectively and efficiently, CANINE combines downsampling, which reduces the input sequence length, with a deep transformer stack, which encodes context. CANINE outperforms a comparable mBERT model by 2.8 F1 on TyDi QA, a challenging multilingual benchmark, despite having 28% fewer model parameters. Tips: CANINE uses no fewer than 3 Transformer encoders internally: 2 “shallow” encoders (which only consist of a single layer) and 1 “deep” encoder (which is a regular BERT encoder). First, a “shallow” encoder is used to contextualize the character embeddings, using local attention. Next, after downsampling, a “deep” encoder is applied. Finally, after upsampling, a “shallow” encoder is used to create the final character embeddings. Details regarding up- and downsampling can be found in the paper. CANINE uses a max sequence length of 2048 characters by default. One can use CanineTokenizer to prepare text for the model. Classification can be done by placing a linear layer on top of the final hidden state of the special [CLS] token (which has a predefined Unicode code point; see the short sketch below). For token classification tasks, however, the downsampled sequence of tokens needs to be upsampled again to match the length of the original character sequence (which is 2048). The details for this can be found in the paper. Models: google/canine-c: Pre-trained with autoregressive character loss, 12-layer, 768-hidden, 12-heads, 121M parameters (size ~500 MB). google/canine-s: Pre-trained with subword loss, 12-layer, 768-hidden, 12-heads, 121M parameters (size ~500 MB). This model was contributed by nielsr. The original code can be found here.
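The classification tip above notes that the special [CLS] token has a predefined Unicode code point. As a quick, hedged illustration, one can inspect the ids that CanineTokenizer produces; the code points mentioned in the comments follow the tokenizer's default special tokens in Unicode's private use area and are assumptions to verify against your checkpoint.

>>> from transformers import CanineTokenizer

>>> tokenizer = CanineTokenizer.from_pretrained("google/canine-c")

>>> encoding = tokenizer("hi")
>>> print(encoding["input_ids"])
>>> # expected to look like [57344, 104, 105, 57345]:
>>> # 57344 (U+E000) is the [CLS] code point, 104 and 105 are ord("h") and ord("i"),
>>> # and 57345 (U+E001) is the [SEP] code point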
Example
CANINE works on raw characters, so it can be used without a tokenizer:

>>> from transformers import CanineModel
>>> import torch

>>> model = CanineModel.from_pretrained("google/canine-c")

>>> text = "hello world"
>>> # use Python's built-in ord() function to turn each character into its Unicode code point id
>>> input_ids = torch.tensor([[ord(char) for char in text]])

>>> outputs = model(input_ids)
>>> pooled_output = outputs.pooler_output
>>> sequence_output = outputs.last_hidden_state

For batched inference and training, it is, however, recommended to make use of the tokenizer (to pad/truncate all sequences to the same length):

>>> from transformers import CanineTokenizer, CanineModel

>>> model = CanineModel.from_pretrained("google/canine-c")
>>> tokenizer = CanineTokenizer.from_pretrained("google/canine-c")

>>> inputs = ["Life is like a box of chocolates.", "You never know what you gonna get."]
>>> encoding = tokenizer(inputs, padding="longest", truncation=True, return_tensors="pt")

>>> outputs = model(**encoding)
>>> pooled_output = outputs.pooler_output
>>> sequence_output = outputs.last_hidden_state

Documentation resources Text classification task guide Token classification task guide Question answering task guide Multiple choice task guide CANINE specific outputs class transformers.models.canine.modeling_canine.CanineModelOutputWithPooling < source > ( last_hidden_state: FloatTensor = None pooler_output: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model (i.e. the output of the final shallow Transformer encoder). pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Hidden-state of the first token of the sequence (classification token) at the last layer of the deep Transformer encoder, further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the input to each encoder + one for the output of each layer of each encoder) of shape (batch_size, sequence_length, hidden_size) and (batch_size, sequence_length // config.downsampling_rate, hidden_size). Hidden-states of the model at the output of each layer plus the initial input to each Transformer encoder. The hidden states of the shallow encoders have length sequence_length, but the hidden states of the deep encoder have length sequence_length // config.downsampling_rate. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of the 3 Transformer encoders of shape (batch_size, num_heads, sequence_length, sequence_length) and (batch_size, num_heads, sequence_length // config.downsampling_rate, sequence_length // config.downsampling_rate). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Output type of CanineModel. Based on BaseModelOutputWithPooling, but with slightly different hidden_states and attentions, as these also include the hidden states and attentions of the shallow Transformer encoders.
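To make the shape conventions above concrete, here is a small, hedged sketch (not an official example) that requests hidden states from CanineModel and prints the distinct sequence lengths they carry; with the default downsampling_rate of 4, the deep-encoder states are expected to be roughly a quarter of the character-level length.

>>> from transformers import CanineTokenizer, CanineModel

>>> model = CanineModel.from_pretrained("google/canine-c")
>>> tokenizer = CanineTokenizer.from_pretrained("google/canine-c")

>>> encoding = tokenizer(["Life is like a box of chocolates."], return_tensors="pt")
>>> outputs = model(**encoding, output_hidden_states=True)

>>> # shallow-encoder states keep the full character-level sequence length,
>>> # while deep-encoder states are shortened by config.downsampling_rate
>>> print(sorted({h.shape[1] for h in outputs.hidden_states}))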
CanineConfig class transformers.CanineConfig < source > ( hidden_size = 768 num_hidden_layers = 12 num_attention_heads = 12 intermediate_size = 3072 hidden_act = 'gelu' hidden_dropout_prob = 0.1 attention_probs_dropout_prob = 0.1 max_position_embeddings = 16384 type_vocab_size = 16 initializer_range = 0.02 layer_norm_eps = 1e-12 pad_token_id = 0 bos_token_id = 57344 eos_token_id = 57345 downsampling_rate = 4 upsampling_kernel_size = 4 num_hash_functions = 8 num_hash_buckets = 16384 local_transformer_stride = 128 **kwargs ) Parameters hidden_size (int, optional, defaults to 768) — Dimension of the encoder layers and the pooler layer. num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the deep Transformer encoder. num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoders. intermediate_size (int, optional, defaults to 3072) — Dimension of the “intermediate” (i.e., feed-forward) layer in the Transformer encoders. hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported. hidden_dropout_prob (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoders, and pooler. attention_probs_dropout_prob (float, optional, defaults to 0.1) — The dropout ratio for the attention probabilities. max_position_embeddings (int, optional, defaults to 16384) — The maximum sequence length that this model might ever be used with. type_vocab_size (int, optional, defaults to 16) — The vocabulary size of the token_type_ids passed when calling CanineModel. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers. downsampling_rate (int, optional, defaults to 4) — The rate at which to downsample the original character sequence length before applying the deep Transformer encoder. upsampling_kernel_size (int, optional, defaults to 4) — The kernel size (i.e. the number of characters in each window) of the convolutional projection layer when projecting back from hidden_size*2 to hidden_size. num_hash_functions (int, optional, defaults to 8) — The number of hash functions to use. Each hash function has its own embedding matrix. num_hash_buckets (int, optional, defaults to 16384) — The number of hash buckets to use. local_transformer_stride (int, optional, defaults to 128) — The stride of the local attention of the first shallow Transformer encoder. Defaults to 128 for good TPU/XLA memory alignment. This is the configuration class to store the configuration of a CanineModel. It is used to instantiate a CANINE model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the CANINE google/canine-s architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example: >>> from transformers import CanineConfig, CanineModel >>> >>> configuration = CanineConfig() >>> >>> model = CanineModel(configuration) >>> >>> configuration = model.config CanineTokenizer class transformers.CanineTokenizer < source > ( bos_token = '\ue000' eos_token = '\ue001' sep_token = '\ue001' cls_token = '\ue000' pad_token = '\x00' mask_token = '\ue003' add_prefix_space = False model_max_length = 2048 **kwargs ) Parameters model_max_length (int, optional, defaults to 2048) — The maximum sentence length the model accepts. Construct a CANINE tokenizer (i.e. a character splitter). It turns text into a sequence of characters, and then converts each character into its Unicode code point. CanineTokenizer inherits from PreTrainedTokenizer. Refer to superclass PreTrainedTokenizer for usage examples and documentation concerning parameters. build_inputs_with_special_tokens < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs to which the special tokens will be added. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of input IDs with the appropriate special tokens. Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A CANINE sequence has the following format: single sequence: [CLS] X [SEP] pair of sequences: [CLS] A [SEP] B [SEP] get_special_tokens_mask < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None already_has_special_tokens: bool = False ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. already_has_special_tokens (bool, optional, defaults to False) — Whether or not the token list is already formatted with special tokens for the model. A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model method. create_token_type_ids_from_sequences < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of token type IDs according to the given sequence(s). Create a mask from the two sequences passed to be used in a sequence-pair classification task. A CANINE sequence pair mask has the following format: 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 | first sequence | second sequence | If token_ids_1 is None, this method only returns the first portion of the mask (0s). CanineModel class transformers.CanineModel < source > ( config add_pooling_layer = True ) Parameters config (CanineConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare CANINE Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.canine.modeling_canine.CanineModelOutputWithPooling or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.models.canine.modeling_canine.CanineModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CanineConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model (i.e. the output of the final shallow Transformer encoder). pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Hidden-state of the first token of the sequence (classification token) at the last layer of the deep Transformer encoder, further processed by a Linear layer and a Tanh activation function. 
The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the input to each encoder + one for the output of each layer of each encoder) of shape (batch_size, sequence_length, hidden_size) and (batch_size, sequence_length // config.downsampling_rate, hidden_size). Hidden-states of the model at the output of each layer plus the initial input to each Transformer encoder. The hidden states of the shallow encoders have length sequence_length, but the hidden states of the deep encoder have length sequence_length // config.downsampling_rate. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of the 3 Transformer encoders of shape (batch_size, num_heads, sequence_length, sequence_length) and (batch_size, num_heads, sequence_length // config.downsampling_rate, sequence_length // config.downsampling_rate). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The CanineModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, CanineModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("google/canine-s") >>> model = CanineModel.from_pretrained("google/canine-s") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state CanineForSequenceClassification class transformers.CanineForSequenceClassification < source > ( config ) Parameters config (CanineConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. CANINE Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. 
See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CanineConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). 
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The CanineForSequenceClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example of single-label classification: >>> import torch >>> from transformers import AutoTokenizer, CanineForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("google/canine-s") >>> model = CanineForSequenceClassification.from_pretrained("google/canine-s") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_id = logits.argmax().item() >>> >>> num_labels = len(model.config.id2label) >>> model = CanineForSequenceClassification.from_pretrained("google/canine-s", num_labels=num_labels) >>> labels = torch.tensor([1]) >>> loss = model(**inputs, labels=labels).loss Example of multi-label classification: >>> import torch >>> from transformers import AutoTokenizer, CanineForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("google/canine-s") >>> model = CanineForSequenceClassification.from_pretrained("google/canine-s", problem_type="multi_label_classification") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5] >>> >>> num_labels = len(model.config.id2label) >>> model = CanineForSequenceClassification.from_pretrained( ... "google/canine-s", num_labels=num_labels, problem_type="multi_label_classification" ... ) >>> labels = torch.sum( ... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1 ... ).to(torch.float) >>> loss = model(**inputs, labels=labels).loss CanineForMultipleChoice class transformers.CanineForMultipleChoice < source > ( config ) Parameters config (CanineConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. CANINE Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See input_ids above) A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CanineConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. 
(see input_ids above). Classification scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The CanineForMultipleChoice forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, CanineForMultipleChoice >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("google/canine-s") >>> model = CanineForMultipleChoice.from_pretrained("google/canine-s") >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." >>> choice0 = "It is eaten with a fork and a knife." >>> choice1 = "It is eaten while held in the hand." >>> labels = torch.tensor(0).unsqueeze(0) >>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True) >>> outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) >>> >>> loss = outputs.loss >>> logits = outputs.logits CanineForTokenClassification class transformers.CanineForTokenClassification < source > ( config ) Parameters config (CanineConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. CANINE Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? 
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1]. A transformers.modeling_outputs.TokenClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CanineConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The CanineForTokenClassification forward method, overrides the __call__ special method. 
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, CanineForTokenClassification >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("google/canine-s") >>> model = CanineForTokenClassification.from_pretrained("google/canine-s") >>> inputs = tokenizer( ... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt" ... ) >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_token_class_ids = logits.argmax(-1) >>> >>> >>> >>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]] >>> predicted_tokens_classes >>> labels = predicted_token_class_ids >>> loss = model(**inputs, labels=labels).loss >>> round(loss.item(), 2) CanineForQuestionAnswering class transformers.CanineForQuestionAnswering < source > ( config ) Parameters config (CanineConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. CANINE Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute span start logits and span end logits). This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None start_positions: typing.Optional[torch.LongTensor] = None end_positions: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? 
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. start_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. end_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CanineConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The CanineForQuestionAnswering forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: >>> from transformers import AutoTokenizer, CanineForQuestionAnswering >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("Splend1dchan/canine-c-squad") >>> model = CanineForQuestionAnswering.from_pretrained("Splend1dchan/canine-c-squad") >>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" >>> inputs = tokenizer(question, text, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> answer_start_index = outputs.start_logits.argmax() >>> answer_end_index = outputs.end_logits.argmax() >>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] >>> tokenizer.decode(predict_answer_tokens, skip_special_tokens=True) 'nice puppet' >>> >>> target_start_index = torch.tensor([14]) >>> target_end_index = torch.tensor([15]) >>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index) >>> loss = outputs.loss >>> round(loss.item(), 2) 8.81
https://huggingface.co/docs/transformers/model_summary
The Transformer model family Since its introduction in 2017, the original Transformer model has inspired many new and exciting models that extend beyond natural language processing (NLP) tasks. There are models for predicting the folded structure of proteins, training a cheetah to run, and time series forecasting. With so many Transformer variants available, it can be easy to miss the bigger picture. What all these models have in common is they’re based on the original Transformer architecture. Some models only use the encoder or decoder, while others use both. This provides a useful taxonomy to categorize and examine the high-level differences within models in the Transformer family, and it’ll help you understand Transformers you haven’t encountered before. If you aren’t familiar with the original Transformer model or need a refresher, check out the How do Transformers work chapter from the Hugging Face course. Computer vision Convolutional network For a long time, convolutional networks (CNNs) were the dominant paradigm for computer vision tasks until the Vision Transformer demonstrated its scalability and efficiency. Even then, some of a CNN’s best qualities, like translation invariance, are so powerful (especially for certain tasks) that some Transformers incorporate convolutions in their architecture. ConvNeXt flipped this exchange around and incorporated design choices from Transformers to modernize a CNN. For example, ConvNeXt uses non-overlapping sliding windows to patchify an image and a larger kernel to increase its global receptive field. ConvNeXt also makes several layer design choices to be more memory-efficient and improve performance, so it competes favorably with Transformers! Encoder The Vision Transformer (ViT) opened the door to computer vision tasks without convolutions. ViT uses a standard Transformer encoder, but its main breakthrough was how it treated an image. It splits an image into fixed-size patches and uses them to create an embedding, just like how a sentence is split into tokens. ViT capitalized on the Transformers’ efficient architecture to demonstrate competitive results with the CNNs at the time while requiring fewer resources to train. ViT was soon followed by other vision models that could also handle dense vision tasks like segmentation as well as detection. One of these models is the Swin Transformer. It builds hierarchical feature maps (like a CNN 👀 and unlike ViT) from smaller-sized patches and merges them with neighboring patches in deeper layers. Attention is only computed within a local window, and the window is shifted between attention layers to create connections to help the model learn better. Since the Swin Transformer can produce hierarchical feature maps, it is a good candidate for dense prediction tasks like segmentation and detection. The SegFormer also uses a Transformer encoder to build hierarchical feature maps, but it adds a simple multilayer perceptron (MLP) decoder on top to combine all the feature maps and make a prediction. Other vision models, like BeIT and ViTMAE, drew inspiration from BERT’s pretraining objective. BeIT is pretrained by masked image modeling (MIM); the image patches are randomly masked, and the image is also tokenized into visual tokens. BeIT is trained to predict the visual tokens corresponding to the masked patches. ViTMAE has a similar pretraining objective, except it must predict the pixels instead of visual tokens. What’s unusual is 75% of the image patches are masked! 
The decoder reconstructs the pixels from the masked tokens and encoded patches. After pretraining, the decoder is thrown away, and the encoder is ready to be used in downstream tasks. Decoder Decoder-only vision models are rare because most vision models rely on an encoder to learn an image representation. But for use cases like image generation, the decoder is a natural fit, as we’ve seen from text generation models like GPT-2. ImageGPT uses the same architecture as GPT-2, but instead of predicting the next token in a sequence, it predicts the next pixel in an image. In addition to image generation, ImageGPT could also be finetuned for image classification. Encoder-decoder Vision models commonly use an encoder (also known as a backbone) to extract important image features before passing them to a Transformer decoder. DETR has a pretrained backbone, but it also uses the complete Transformer encoder-decoder architecture for object detection. The encoder learns image representations and combines them with object queries (each object query is a learned embedding that focuses on a region or object in an image) in the decoder. DETR predicts the bounding box coordinates and class label for each object query. Natural language processing Encoder BERT is an encoder-only Transformer that randomly masks certain tokens in the input to avoid seeing other tokens, which would allow it to “cheat”. The pretraining objective is to predict the masked token based on the context. This allows BERT to fully use the left and right contexts to help it learn a deeper and richer representation of the inputs. However, there was still room for improvement in BERT’s pretraining strategy. RoBERTa improved upon this by introducing a new pretraining recipe that includes training for longer and on larger batches, randomly masking tokens at each epoch instead of just once during preprocessing, and removing the next-sentence prediction objective. The dominant strategy to improve performance is to increase the model size. But training large models is computationally expensive. One way to reduce computational costs is using a smaller model like DistilBERT. DistilBERT uses knowledge distillation - a compression technique - to create a smaller version of BERT while keeping nearly all of its language understanding capabilities. However, most Transformer models continued to trend towards more parameters, leading to new models focused on improving training efficiency. ALBERT reduces memory consumption by lowering the number of parameters in two ways: separating the larger vocabulary embedding into two smaller matrices and allowing layers to share parameters. DeBERTa added a disentangled attention mechanism where the word and its position are separately encoded in two vectors. The attention is computed from these separate vectors instead of a single vector containing the word and position embeddings. Longformer also focused on making attention more efficient, especially for processing documents with longer sequence lengths. It uses a combination of local windowed attention (attention only calculated from fixed window size around each token) and global attention (only for specific task tokens like [CLS] for classification) to create a sparse attention matrix instead of a full attention matrix. Decoder GPT-2 is a decoder-only Transformer that predicts the next word in the sequence. It masks tokens to the right so the model can’t “cheat” by looking ahead. 
By pretraining on a massive body of text, GPT-2 became really good at generating text, even if the text is only sometimes accurate or true. But GPT-2 lacked the bidirectional context from BERT’s pretraining, which made it unsuitable for certain tasks. XLNET combines the best of both BERT and GPT-2’s pretraining objectives by using a permutation language modeling objective (PLM) that allows it to learn bidirectionally. After GPT-2, language models grew even bigger and are now known as large language models (LLMs). LLMs demonstrate few- or even zero-shot learning if pretrained on a large enough dataset. GPT-J is an LLM with 6B parameters and trained on 400B tokens. GPT-J was followed by OPT, a family of decoder-only models, the largest of which is 175B and trained on 180B tokens. BLOOM was released around the same time, and the largest model in the family has 176B parameters and is trained on 366B tokens in 46 languages and 13 programming languages. Encoder-decoder BART keeps the original Transformer architecture, but it modifies the pretraining objective with text infilling corruption, where some text spans are replaced with a single mask token. The decoder predicts the uncorrupted tokens (future tokens are masked) and uses the encoder’s hidden states to help it. Pegasus is similar to BART, but Pegasus masks entire sentences instead of text spans. In addition to masked language modeling, Pegasus is pretrained by gap sentence generation (GSG). The GSG objective masks whole sentences important to a document, replacing them with a mask token. The decoder must generate the output from the remaining sentences. T5 is a more unique model that casts all NLP tasks into a text-to-text problem using specific prefixes. For example, the prefix Summarize: indicates a summarization task. T5 is pretrained by supervised (GLUE and SuperGLUE) training and self-supervised training (randomly sample and drop out 15% of tokens). Audio Encoder Wav2Vec2 uses a Transformer encoder to learn speech representations directly from raw audio waveforms. It is pretrained with a contrastive task to determine the true speech representation from a set of false ones. HuBERT is similar to Wav2Vec2 but has a different training process. Target labels are created by a clustering step in which segments of similar audio are assigned to a cluster which becomes a hidden unit. The hidden unit is mapped to an embedding to make a prediction. Encoder-decoder Speech2Text is a speech model designed for automatic speech recognition (ASR) and speech translation. The model accepts log mel-filter bank features extracted from the audio waveform and pretrained autoregressively to generate a transcript or translation. Whisper is also an ASR model, but unlike many other speech models, it is pretrained on a massive amount of ✨ labeled ✨ audio transcription data for zero-shot performance. A large chunk of the dataset also contains non-English languages, meaning Whisper can also be used for low-resource languages. Structurally, Whisper is similar to Speech2Text. The audio signal is converted to a log-mel spectrogram encoded by the encoder. The decoder generates the transcript autoregressively from the encoder’s hidden states and the previous tokens. Multimodal Encoder VisualBERT is a multimodal model for vision-language tasks released shortly after BERT. It combines BERT and a pretrained object detection system to extract image features into visual embeddings, passed alongside text embeddings to BERT. 
VisualBERT predicts the masked text based on the unmasked text and the visual embeddings, and it also has to predict whether the text is aligned with the image. When ViT was released, ViLT adopted ViT in its architecture because it was easier to get the image embeddings this way. The image embeddings are jointly processed with the text embeddings. From there, ViLT is pretrained by image text matching, masked language modeling, and whole word masking. CLIP takes a different approach and makes a pair prediction of (image, text) . An image encoder (ViT) and a text encoder (Transformer) are jointly trained on a 400 million (image, text) pair dataset to maximize the similarity between the image and text embeddings of the (image, text) pairs. After pretraining, you can use natural language to instruct CLIP to predict the text given an image or vice versa. OWL-ViT builds on top of CLIP by using it as its backbone for zero-shot object detection. After pretraining, an object detection head is added to make a set prediction over the (class, bounding box) pairs. Encoder-decoder Optical character recognition (OCR) is a long-standing text recognition task that typically involves several components to understand the image and generate the text. TrOCR simplifies the process using an end-to-end Transformer. The encoder is a ViT-style model for image understanding and processes the image as fixed-size patches. The decoder accepts the encoder’s hidden states and autoregressively generates text. Donut is a more general visual document understanding model that doesn’t rely on OCR-based approaches. It uses a Swin Transformer as the encoder and multilingual BART as the decoder. Donut is pretrained to read text by predicting the next word based on the image and text annotations. The decoder generates a token sequence given a prompt. The prompt is represented by a special token for each downstream task. For example, document parsing has a special parsing token that is combined with the encoder hidden states to parse the document into a structured output format (JSON). Reinforcement learning Decoder The Decision and Trajectory Transformer casts the state, action, and reward as a sequence modeling problem. The Decision Transformer generates a series of actions that lead to a future desired return based on returns-to-go, past states, and actions. For the last K timesteps, each of the three modalities are converted into token embeddings and processed by a GPT-like model to predict a future action token. Trajectory Transformer also tokenizes the states, actions, and rewards and processes them with a GPT architecture. Unlike the Decision Transformer, which is focused on reward conditioning, the Trajectory Transformer generates future actions with beam search.
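To make the sequence layout described above concrete, here is a minimal, illustrative sketch (not the actual Decision Transformer implementation; the dimensions and embedding layers are invented for the example) of how returns-to-go, states, and actions for the last K timesteps can each be embedded and interleaved into a single token sequence for a GPT-like backbone:

import torch

# Hypothetical sizes: K timesteps, a 17-dimensional state, a 6-dimensional action.
K, state_dim, act_dim, hidden = 20, 17, 6, 128
embed_return = torch.nn.Linear(1, hidden)
embed_state = torch.nn.Linear(state_dim, hidden)
embed_action = torch.nn.Linear(act_dim, hidden)

returns_to_go = torch.randn(1, K, 1)
states = torch.randn(1, K, state_dim)
actions = torch.randn(1, K, act_dim)

# Stack the three modalities per timestep, then flatten to the interleaved
# sequence (R_1, s_1, a_1, R_2, s_2, a_2, ...) that the transformer consumes.
tokens = torch.stack(
    [embed_return(returns_to_go), embed_state(states), embed_action(actions)], dim=2
)  # (1, K, 3, hidden)
sequence = tokens.view(1, 3 * K, hidden)
print(sequence.shape)  # torch.Size([1, 60, 128])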
https://huggingface.co/docs/transformers/tokenizer_summary
Summary of the tokenizers On this page, we will have a closer look at tokenization. As we saw in the preprocessing tutorial, tokenizing a text is splitting it into words or subwords, which then are converted to ids through a look-up table. Converting words or subwords to ids is straightforward, so in this summary, we will focus on splitting a text into words or subwords (i.e. tokenizing a text). More specifically, we will look at the three main types of tokenizers used in 🤗 Transformers: Byte-Pair Encoding (BPE), WordPiece, and SentencePiece, and show examples of which tokenizer type is used by which model. Note that on each model page, you can look at the documentation of the associated tokenizer to know which tokenizer type was used by the pretrained model. For instance, if we look at BertTokenizer, we can see that the model uses WordPiece. Introduction Splitting a text into smaller chunks is a task that is harder than it looks, and there are multiple ways of doing so. For instance, let’s look at the sentence "Don't you love 🤗 Transformers? We sure do." A simple way of tokenizing this text is to split it by spaces, which would give: ["Don't", "you", "love", "🤗", "Transformers?", "We", "sure", "do."] This is a sensible first step, but if we look at the tokens "Transformers?" and "do.", we notice that the punctuation is attached to the words "Transformer" and "do", which is suboptimal. We should take the punctuation into account so that a model does not have to learn a different representation of a word and every possible punctuation symbol that could follow it, which would explode the number of representations the model has to learn. Taking punctuation into account, tokenizing our exemplary text would give: ["Don", "'", "t", "you", "love", "🤗", "Transformers", "?", "We", "sure", "do", "."] Better. However, it is disadvantageous, how the tokenization dealt with the word "Don't". "Don't" stands for "do not", so it would be better tokenized as ["Do", "n't"]. This is where things start getting complicated, and part of the reason each model has its own tokenizer type. Depending on the rules we apply for tokenizing a text, a different tokenized output is generated for the same text. A pretrained model only performs properly if you feed it an input that was tokenized with the same rules that were used to tokenize its training data. spaCy and Moses are two popular rule-based tokenizers. Applying them on our example, spaCy and Moses would output something like: ["Do", "n't", "you", "love", "🤗", "Transformers", "?", "We", "sure", "do", "."] As can be seen space and punctuation tokenization, as well as rule-based tokenization, is used here. Space and punctuation tokenization and rule-based tokenization are both examples of word tokenization, which is loosely defined as splitting sentences into words. While it’s the most intuitive way to split texts into smaller chunks, this tokenization method can lead to problems for massive text corpora. In this case, space and punctuation tokenization usually generates a very big vocabulary (the set of all unique words and tokens used). E.g., Transformer XL uses space and punctuation tokenization, resulting in a vocabulary size of 267,735! Such a big vocabulary size forces the model to have an enormous embedding matrix as the input and output layer, which causes both an increased memory and time complexity. In general, transformers models rarely have a vocabulary size greater than 50,000, especially if they are pretrained only on a single language. 
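To make the comparison above concrete, here is a small sketch of the two naive strategies on the example sentence. The regular expression is only a rough stand-in for a real rule-based pre-tokenizer:

import re

text = "Don't you love 🤗 Transformers? We sure do."

# Naive whitespace split: punctuation stays glued to the words.
print(text.split())
# ["Don't", 'you', 'love', '🤗', 'Transformers?', 'We', 'sure', 'do.']

# A (very rough) punctuation-aware split: separate runs of word characters from punctuation.
print(re.findall(r"\w+|[^\w\s]", text))
# ['Don', "'", 't', 'you', 'love', '🤗', 'Transformers', '?', 'We', 'sure', 'do', '.']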
So if simple space and punctuation tokenization is unsatisfactory, why not simply tokenize on characters? While character tokenization is very simple and would greatly reduce memory and time complexity it makes it much harder for the model to learn meaningful input representations. E.g. learning a meaningful context-independent representation for the letter "t" is much harder than learning a context-independent representation for the word "today". Therefore, character tokenization is often accompanied by a loss of performance. So to get the best of both worlds, transformers models use a hybrid between word-level and character-level tokenization called subword tokenization. Subword tokenization Subword tokenization algorithms rely on the principle that frequently used words should not be split into smaller subwords, but rare words should be decomposed into meaningful subwords. For instance "annoyingly" might be considered a rare word and could be decomposed into "annoying" and "ly". Both "annoying" and "ly" as stand-alone subwords would appear more frequently while at the same time the meaning of "annoyingly" is kept by the composite meaning of "annoying" and "ly". This is especially useful in agglutinative languages such as Turkish, where you can form (almost) arbitrarily long complex words by stringing together subwords. Subword tokenization allows the model to have a reasonable vocabulary size while being able to learn meaningful context-independent representations. In addition, subword tokenization enables the model to process words it has never seen before, by decomposing them into known subwords. For instance, the BertTokenizer tokenizes "I have a new GPU!" as follows: >>> from transformers import BertTokenizer >>> tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") >>> tokenizer.tokenize("I have a new GPU!") ["i", "have", "a", "new", "gp", "##u", "!"] Because we are considering the uncased model, the sentence was lowercased first. We can see that the words ["i", "have", "a", "new"] are present in the tokenizer’s vocabulary, but the word "gpu" is not. Consequently, the tokenizer splits "gpu" into known subwords: ["gp" and "##u"]. "##" means that the rest of the token should be attached to the previous one, without space (for decoding or reversal of the tokenization). As another example, XLNetTokenizer tokenizes our previously exemplary text as follows: >>> from transformers import XLNetTokenizer >>> tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased") >>> tokenizer.tokenize("Don't you love 🤗 Transformers? We sure do.") ["▁Don", "'", "t", "▁you", "▁love", "▁", "🤗", "▁", "Transform", "ers", "?", "▁We", "▁sure", "▁do", "."] We’ll get back to the meaning of those "▁" when we look at SentencePiece. As one can see, the rare word "Transformers" has been split into the more frequent subwords "Transform" and "ers". Let’s now look at how the different subword tokenization algorithms work. Note that all of those tokenization algorithms rely on some form of training which is usually done on the corpus the corresponding model will be trained on. Byte-Pair Encoding (BPE) Byte-Pair Encoding (BPE) was introduced in Neural Machine Translation of Rare Words with Subword Units (Sennrich et al., 2015). BPE relies on a pre-tokenizer that splits the training data into words. Pretokenization can be as simple as space tokenization, e.g. GPT-2, RoBERTa. More advanced pre-tokenization include rule-based tokenization, e.g. 
XLM, FlauBERT which uses Moses for most languages, or GPT which uses Spacy and ftfy, to count the frequency of each word in the training corpus. After pre-tokenization, a set of unique words has been created and the frequency with which each word occurred in the training data has been determined. Next, BPE creates a base vocabulary consisting of all symbols that occur in the set of unique words and learns merge rules to form a new symbol from two symbols of the base vocabulary. It does so until the vocabulary has attained the desired vocabulary size. Note that the desired vocabulary size is a hyperparameter to define before training the tokenizer. As an example, let’s assume that after pre-tokenization, the following set of words including their frequency has been determined: ("hug", 10), ("pug", 5), ("pun", 12), ("bun", 4), ("hugs", 5) Consequently, the base vocabulary is ["b", "g", "h", "n", "p", "s", "u"]. Splitting all words into symbols of the base vocabulary, we obtain: ("h" "u" "g", 10), ("p" "u" "g", 5), ("p" "u" "n", 12), ("b" "u" "n", 4), ("h" "u" "g" "s", 5) BPE then counts the frequency of each possible symbol pair and picks the symbol pair that occurs most frequently. In the example above "h" followed by "u" is present 10 + 5 = 15 times (10 times in the 10 occurrences of "hug", 5 times in the 5 occurrences of "hugs"). However, the most frequent symbol pair is "u" followed by "g", occurring 10 + 5 + 5 = 20 times in total. Thus, the first merge rule the tokenizer learns is to group all "u" symbols followed by a "g" symbol together. Next, "ug" is added to the vocabulary. The set of words then becomes ("h" "ug", 10), ("p" "ug", 5), ("p" "u" "n", 12), ("b" "u" "n", 4), ("h" "ug" "s", 5) BPE then identifies the next most common symbol pair. It’s "u" followed by "n", which occurs 16 times. "u", "n" is merged to "un" and added to the vocabulary. The next most frequent symbol pair is "h" followed by "ug", occurring 15 times. Again the pair is merged and "hug" can be added to the vocabulary. At this stage, the vocabulary is ["b", "g", "h", "n", "p", "s", "u", "ug", "un", "hug"] and our set of unique words is represented as ("hug", 10), ("p" "ug", 5), ("p" "un", 12), ("b" "un", 4), ("hug" "s", 5) Assuming, that the Byte-Pair Encoding training would stop at this point, the learned merge rules would then be applied to new words (as long as those new words do not include symbols that were not in the base vocabulary). For instance, the word "bug" would be tokenized to ["b", "ug"] but "mug" would be tokenized as ["<unk>", "ug"] since the symbol "m" is not in the base vocabulary. In general, single letters such as "m" are not replaced by the "<unk>" symbol because the training data usually includes at least one occurrence of each letter, but it is likely to happen for very special characters like emojis. As mentioned earlier, the vocabulary size, i.e. the base vocabulary size + the number of merges, is a hyperparameter to choose. For instance GPT has a vocabulary size of 40,478 since they have 478 base characters and chose to stop training after 40,000 merges. Byte-level BPE A base vocabulary that includes all possible base characters can be quite large if e.g. all unicode characters are considered as base characters. To have a better base vocabulary, GPT-2 uses bytes as the base vocabulary, which is a clever trick to force the base vocabulary to be of size 256 while ensuring that every base character is included in the vocabulary. 
With some additional rules to deal with punctuation, the GPT2’s tokenizer can tokenize every text without the need for the <unk> symbol. GPT-2 has a vocabulary size of 50,257, which corresponds to the 256 bytes base tokens, a special end-of-text token and the symbols learned with 50,000 merges. WordPiece WordPiece is the subword tokenization algorithm used for BERT, DistilBERT, and Electra. The algorithm was outlined in Japanese and Korean Voice Search (Schuster et al., 2012) and is very similar to BPE. WordPiece first initializes the vocabulary to include every character present in the training data and progressively learns a given number of merge rules. In contrast to BPE, WordPiece does not choose the most frequent symbol pair, but the one that maximizes the likelihood of the training data once added to the vocabulary. So what does this mean exactly? Referring to the previous example, maximizing the likelihood of the training data is equivalent to finding the symbol pair, whose probability divided by the probabilities of its first symbol followed by its second symbol is the greatest among all symbol pairs. E.g. "u", followed by "g" would have only been merged if the probability of "ug" divided by "u", "g" would have been greater than for any other symbol pair. Intuitively, WordPiece is slightly different to BPE in that it evaluates what it loses by merging two symbols to ensure it’s worth it. Unigram Unigram is a subword tokenization algorithm introduced in Subword Regularization: Improving Neural Network Translation Models with Multiple Subword Candidates (Kudo, 2018). In contrast to BPE or WordPiece, Unigram initializes its base vocabulary to a large number of symbols and progressively trims down each symbol to obtain a smaller vocabulary. The base vocabulary could for instance correspond to all pre-tokenized words and the most common substrings. Unigram is not used directly for any of the models in the transformers, but it’s used in conjunction with SentencePiece. At each training step, the Unigram algorithm defines a loss (often defined as the log-likelihood) over the training data given the current vocabulary and a unigram language model. Then, for each symbol in the vocabulary, the algorithm computes how much the overall loss would increase if the symbol was to be removed from the vocabulary. Unigram then removes p (with p usually being 10% or 20%) percent of the symbols whose loss increase is the lowest, i.e. those symbols that least affect the overall loss over the training data. This process is repeated until the vocabulary has reached the desired size. The Unigram algorithm always keeps the base characters so that any word can be tokenized. Because Unigram is not based on merge rules (in contrast to BPE and WordPiece), the algorithm has several ways of tokenizing new text after training. As an example, if a trained Unigram tokenizer exhibits the vocabulary: ["b", "g", "h", "n", "p", "s", "u", "ug", "un", "hug"], "hugs" could be tokenized both as ["hug", "s"], ["h", "ug", "s"] or ["h", "u", "g", "s"]. So which one to choose? Unigram saves the probability of each token in the training corpus on top of saving the vocabulary so that the probability of each possible tokenization can be computed after training. The algorithm simply picks the most likely tokenization in practice, but also offers the possibility to sample a possible tokenization according to their probabilities. Those probabilities are defined by the loss the tokenizer is trained on. 
Assuming that the training data consists of the words $x_{1}, \dots, x_{N}$ and that the set of all possible tokenizations for a word $x_{i}$ is defined as $S(x_{i})$, then the overall loss is defined as

$$\mathcal{L} = -\sum_{i=1}^{N} \log \left( \sum_{x \in S(x_{i})} p(x) \right)$$

SentencePiece All tokenization algorithms described so far have the same problem: it is assumed that the input text uses spaces to separate words. However, not all languages use spaces to separate words. One possible solution is to use language-specific pre-tokenizers, e.g. XLM uses a specific Chinese, Japanese, and Thai pre-tokenizer. To solve this problem more generally, SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing (Kudo et al., 2018) treats the input as a raw input stream, thus including the space in the set of characters to use. It then uses the BPE or unigram algorithm to construct the appropriate vocabulary. The XLNetTokenizer uses SentencePiece for example, which is also why in the example earlier the "▁" character was included in the vocabulary. Decoding with SentencePiece is very easy since all tokens can just be concatenated and "▁" is replaced by a space. All transformers models in the library that use SentencePiece use it in combination with unigram. Examples of models using SentencePiece are ALBERT, XLNet, Marian, and T5.
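As a small illustration of how the saved probabilities are used, here is a sketch that scores the three candidate tokenizations of "hugs" from the Unigram section above. The token probabilities are invented purely for the example and would normally come from the trained unigram language model:

import math

token_probs = {
    "b": 0.04, "g": 0.05, "h": 0.06, "n": 0.07, "p": 0.05,
    "s": 0.06, "u": 0.07, "ug": 0.15, "un": 0.12, "hug": 0.18,
}

candidates = [["hug", "s"], ["h", "ug", "s"], ["h", "u", "g", "s"]]

def log_prob(tokenization):
    # Unigram assumes tokens are independent, so the segmentation score
    # is just the sum of the log-probabilities of its tokens.
    return sum(math.log(token_probs[token]) for token in tokenization)

best = max(candidates, key=log_prob)
print(best)  # ['hug', 's'] -- fewer, higher-probability tokens win

With these made-up numbers, ["hug", "s"] is picked as the most likely tokenization; sampling a segmentation according to these scores instead of taking the argmax gives the subword regularization behavior described in the paper.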
https://huggingface.co/docs/transformers/attention
Attention mechanisms Most transformer models use full attention in the sense that the attention matrix is square. It can be a big computational bottleneck when you have long texts. Longformer and Reformer are models that try to be more efficient and use a sparse version of the attention matrix to speed up training. LSH attention Reformer uses LSH attention. In the softmax(QK^t), only the biggest elements (in the softmax dimension) of the matrix QK^t are going to give useful contributions. So for each query q in Q, we can consider only the keys k in K that are close to q. A hash function is used to determine if q and k are close. The attention mask is modified to mask the current token (except at the first position), because it will give a query and a key that are equal (so very similar to each other). Since the hash can be a bit random, several hash functions are used in practice (determined by an n_rounds parameter) and then averaged together. Local attention Longformer uses local attention: often, the local context (e.g., what are the two tokens to the left and right?) is enough to take action for a given token. Also, by stacking attention layers that have a small window, the last layer will have a receptive field of more than just the tokens in the window, allowing the model to build a representation of the whole sentence. Some preselected input tokens are also given global attention: for those few tokens, the attention matrix can access all tokens, and this process is symmetric: all other tokens have access to those specific tokens (on top of the ones in their local window). This is shown in Figure 2d of the paper, which gives a sample attention mask. Using those attention matrices with fewer parameters then allows the model to handle inputs with a longer sequence length. Other tricks Axial positional encodings Reformer uses axial positional encodings: in traditional transformer models, the positional encoding $E$ is a matrix of size $l \times d$, with $l$ the sequence length and $d$ the dimension of the hidden state. If you have very long texts, this matrix can be huge and take way too much space on the GPU. To alleviate that, axial positional encodings factorize that big matrix $E$ into two smaller matrices $E_1$ and $E_2$, with dimensions $l_{1} \times d_{1}$ and $l_{2} \times d_{2}$, such that $l_{1} \times l_{2} = l$ and $d_{1} + d_{2} = d$ (with the product for the lengths, this ends up being much smaller). The embedding for time step $j$ in $E$ is obtained by concatenating the embeddings for time step $j \% l_{1}$ in $E_1$ and $j \,//\, l_{1}$ in $E_2$.
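As a rough sketch of the factorization (the shapes below are invented for illustration and are not Reformer's actual configuration):

import torch

# Factorized positional encodings: instead of one (l, d) matrix, keep two smaller ones.
l1, l2, d1, d2 = 128, 512, 256, 768  # so l = l1 * l2 = 65536 and d = d1 + d2 = 1024
E1 = torch.nn.Embedding(l1, d1)
E2 = torch.nn.Embedding(l2, d2)

def axial_position_embedding(j: torch.LongTensor) -> torch.Tensor:
    # Position j gets the concatenation of E1[j % l1] and E2[j // l1].
    return torch.cat([E1(j % l1), E2(j // l1)], dim=-1)

positions = torch.arange(1000)
print(axial_position_embedding(positions).shape)  # torch.Size([1000, 1024])
# Full matrix: 65536 * 1024 ≈ 67M entries; factors: 128*256 + 512*768 ≈ 0.43M entries.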
https://huggingface.co/docs/transformers/model_memory_anatomy
Model training anatomy To understand performance optimization techniques that one can apply to improve efficiency of model training speed and memory utilization, it’s helpful to get familiar with how GPU is utilized during training, and how compute intensity varies depending on an operation performed. Let’s start by exploring a motivating example of GPU utilization and the training run of a model. For the demonstration, we’ll need to install a few libraries: pip install transformers datasets accelerate nvidia-ml-py3 The nvidia-ml-py3 library allows us to monitor the memory usage of the models from within Python. You might be familiar with the nvidia-smi command in the terminal - this library allows to access the same information in Python directly. Then, we create some dummy data: random token IDs between 100 and 30000 and binary labels for a classifier. In total, we get 512 sequences each with length 512 and store them in a Dataset with PyTorch format. >>> import numpy as np >>> from datasets import Dataset >>> seq_len, dataset_size = 512, 512 >>> dummy_data = { ... "input_ids": np.random.randint(100, 30000, (dataset_size, seq_len)), ... "labels": np.random.randint(0, 1, (dataset_size)), ... } >>> ds = Dataset.from_dict(dummy_data) >>> ds.set_format("pt") To print summary statistics for the GPU utilization and the training run with the Trainer we define two helper functions: >>> from pynvml import * >>> def print_gpu_utilization(): ... nvmlInit() ... handle = nvmlDeviceGetHandleByIndex(0) ... info = nvmlDeviceGetMemoryInfo(handle) ... print(f"GPU memory occupied: {info.used//1024**2} MB.") >>> def print_summary(result): ... print(f"Time: {result.metrics['train_runtime']:.2f}") ... print(f"Samples/second: {result.metrics['train_samples_per_second']:.2f}") ... print_gpu_utilization() Let’s verify that we start with a free GPU memory: >>> print_gpu_utilization() GPU memory occupied: 0 MB. That looks good: the GPU memory is not occupied as we would expect before we load any models. If that’s not the case on your machine make sure to stop all processes that are using GPU memory. However, not all free GPU memory can be used by the user. When a model is loaded to the GPU the kernels are also loaded, which can take up 1-2GB of memory. To see how much it is we load a tiny tensor into the GPU which triggers the kernels to be loaded as well. >>> import torch >>> torch.ones((1, 1)).to("cuda") >>> print_gpu_utilization() GPU memory occupied: 1343 MB. We see that the kernels alone take up 1.3GB of GPU memory. Now let’s see how much space the model uses. Load Model First, we load the bert-large-uncased model. We load the model weights directly to the GPU so that we can check how much space just the weights use. >>> from transformers import AutoModelForSequenceClassification >>> model = AutoModelForSequenceClassification.from_pretrained("bert-large-uncased").to("cuda") >>> print_gpu_utilization() GPU memory occupied: 2631 MB. We can see that the model weights alone take up 1.3 GB of GPU memory. The exact number depends on the specific GPU you are using. Note that on newer GPUs a model can sometimes take up more space since the weights are loaded in an optimized fashion that speeds up the usage of the model. 
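As a quick sanity check of that number, we can compare the parameter count of the model loaded above with the observed memory jump; this is only a rough back-of-the-envelope calculation:

>>> num_params = sum(p.numel() for p in model.parameters())
>>> print(f"{num_params / 1e6:.0f}M parameters -> {num_params * 4 / 1024**2:.0f} MB in fp32")

Roughly 335M parameters at 4 bytes each come out to about 1.3 GB, consistent with the jump from 1343 MB to 2631 MB observed above.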
Now we can also quickly check if we get the same result as with nvidia-smi CLI: Tue Jan 11 08:58:05 2022 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 460.91.03 Driver Version: 460.91.03 CUDA Version: 11.2 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. | |===============================+======================+======================| | 0 Tesla V100-SXM2... On | 00000000:00:04.0 Off | 0 | | N/A 37C P0 39W / 300W | 2631MiB / 16160MiB | 0% Default | | | | N/A | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=============================================================================| | 0 N/A N/A 3721 C ...nvs/codeparrot/bin/python 2629MiB | +-----------------------------------------------------------------------------+ We get the same number as before and you can also see that we are using a V100 GPU with 16GB of memory. So now we can start training the model and see how the GPU memory consumption changes. First, we set up a few standard training arguments: default_args = { "output_dir": "tmp", "evaluation_strategy": "steps", "num_train_epochs": 1, "log_level": "error", "report_to": "none", } If you plan to run multiple experiments, in order to properly clear the memory between experiments, restart the Python kernel between experiments. Memory utilization at vanilla training Let’s use the Trainer and train the model without using any GPU performance optimization techniques and a batch size of 4: >>> from transformers import TrainingArguments, Trainer, logging >>> logging.set_verbosity_error() >>> training_args = TrainingArguments(per_device_train_batch_size=4, **default_args) >>> trainer = Trainer(model=model, args=training_args, train_dataset=ds) >>> result = trainer.train() >>> print_summary(result) Time: 57.82 Samples/second: 8.86 GPU memory occupied: 14949 MB. We see that already a relatively small batch size almost fills up our GPU’s entire memory. However, a larger batch size can often result in faster model convergence or better end performance. So ideally we want to tune the batch size to our model’s needs and not to the GPU limitations. What’s interesting is that we use much more memory than the size of the model. To understand a bit better why this is the case let’s have a look at a model’s operations and memory needs. Anatomy of Model's Operations Transformers architecture includes 3 main groups of operations grouped below by compute-intensity. Tensor Contractions Linear layers and components of Multi-Head Attention all do batched matrix-matrix multiplications. These operations are the most compute-intensive part of training a transformer. Statistical Normalizations Softmax and layer normalization are less compute-intensive than tensor contractions, and involve one or more reduction operations, the result of which is then applied via a map. Element-wise Operators These are the remaining operators: biases, dropout, activations, and residual connections. These are the least compute-intensive operations. This knowledge can be helpful to know when analyzing performance bottlenecks. 
This summary is derived from Data Movement Is All You Need: A Case Study on Optimizing Transformers 2020 Anatomy of Model's Memory We’ve seen that training the model uses much more memory than just putting the model on the GPU. This is because there are many components during training that use GPU memory. The components on GPU memory are the following: model weights optimizer states gradients forward activations saved for gradient computation temporary buffers functionality-specific memory A typical model trained in mixed precision with AdamW requires 18 bytes per model parameter plus activation memory. For inference there are no optimizer states and gradients, so we can subtract those. And thus we end up with 6 bytes per model parameter for mixed precision inference, plus activation memory. Let’s look at the details. Model Weights: 4 bytes * number of parameters for fp32 training 6 bytes * number of parameters for mixed precision training (maintains a model in fp32 and one in fp16 in memory) Optimizer States: 8 bytes * number of parameters for normal AdamW (maintains 2 states) 2 bytes * number of parameters for 8-bit AdamW optimizers like bitsandbytes 4 bytes * number of parameters for optimizers like SGD with momentum (maintains only 1 state) Gradients 4 bytes * number of parameters for either fp32 or mixed precision training (gradients are always kept in fp32) Forward Activations size depends on many factors, the key ones being sequence length, hidden size and batch size. There are the input and output that are being passed and returned by the forward and the backward functions and the forward activations saved for gradient computation. Temporary Memory Additionally, there are all kinds of temporary variables which get released once the calculation is done, but in the moment these could require additional memory and could push to OOM. Therefore, when coding it’s crucial to think strategically about such temporary variables and sometimes to explicitly free those as soon as they are no longer needed. Functionality-specific memory Then, your software could have special memory needs. For example, when generating text using beam search, the software needs to maintain multiple copies of inputs and outputs. forward vs backward Execution Speed For convolutions and linear layers there are 2x flops in the backward compared to the forward, which generally translates into ~2x slower (sometimes more, because sizes in the backward tend to be more awkward). Activations are usually bandwidth-limited, and it’s typical for an activation to have to read more data in the backward than in the forward (e.g. activation forward reads once, writes once, activation backward reads twice, gradOutput and output of the forward, and writes once, gradInput). As you can see, there are potentially a few places where we could save GPU memory or speed up operations. Now that you understand what affects GPU utilization and computation speed, refer to the Methods and tools for efficient training on a single GPU documentation page to learn about performance optimization techniques.
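To make the byte counts above easier to apply, here is a small helper, offered as an illustration rather than a library utility, that estimates the static memory for mixed-precision training with regular AdamW (6 + 8 + 4 = 18 bytes per parameter, activations excluded):

def estimate_static_training_memory_gb(num_params):
    weights = 6 * num_params     # fp32 master copy + fp16 copy of the weights
    optimizer = 8 * num_params   # two AdamW states kept in fp32
    gradients = 4 * num_params   # gradients kept in fp32
    total = weights + optimizer + gradients
    return {name: value / 1024**3 for name, value in [
        ("weights", weights), ("optimizer", optimizer),
        ("gradients", gradients), ("total", total),
    ]}

print(estimate_static_training_memory_gb(336_000_000))  # roughly the size of bert-large

For a bert-large-sized model this static part alone is already several gigabytes; the rest of the ~15 GB observed during training above is dominated by forward activations and temporary buffers.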
https://huggingface.co/docs/transformers/pipeline_webserver
Using pipelines for a webserver

Creating an inference engine is a complex topic, and the "best" solution will most likely depend on your problem space. Are you on CPU or GPU? Do you want the lowest latency, the highest throughput, support for many models, or to highly optimize one specific model? There are many ways to tackle this topic, so what we are going to present is a good default to get started, which may not necessarily be the most optimal solution for you.

The key thing to understand is that we can use an iterator, just like you would on a dataset, since a webserver is basically a system that waits for requests and treats them as they come in.

Usually webservers are multiplexed (multithreaded, async, etc.) to handle various requests concurrently. Pipelines, on the other hand (and mostly the underlying models), are not really great for parallelism; they take up a lot of RAM, so it's best to give them all the available resources while they are running, since inference is a compute-intensive job. We are going to solve that by having the webserver handle the light load of receiving and sending requests, and having a single thread handle the actual work.

This example is going to use starlette. The actual framework is not really important, but you might have to tune or change the code if you are using another one to achieve the same effect.

Create server.py:

from starlette.applications import Starlette
from starlette.responses import JSONResponse
from starlette.routing import Route
from transformers import pipeline
import asyncio


async def homepage(request):
    payload = await request.body()
    string = payload.decode("utf-8")
    response_q = asyncio.Queue()
    await request.app.model_queue.put((string, response_q))
    output = await response_q.get()
    return JSONResponse(output)


async def server_loop(q):
    pipe = pipeline(model="bert-base-uncased")
    while True:
        (string, response_q) = await q.get()
        out = pipe(string)
        await response_q.put(out)


app = Starlette(
    routes=[
        Route("/", homepage, methods=["POST"]),
    ],
)


@app.on_event("startup")
async def startup_event():
    q = asyncio.Queue()
    app.model_queue = q
    asyncio.create_task(server_loop(q))

Now you can start it with:

uvicorn server:app

And you can query it:

curl -X POST -d "test [MASK]" http://localhost:8000/

And there you go, now you have a good idea of how to create a webserver!

What is really important is that we load the model only once, so there are no copies of the model on the webserver. This way, no unnecessary RAM is being used. Then the queuing mechanism allows you to do fancy stuff like maybe accumulating a few items before inferring to use dynamic batching:

The code sample below is intentionally written like pseudo-code for readability. Do not run this without checking if it makes sense for your system resources!

(string, rq) = await q.get()
strings = [string]
queues = [rq]
while True:
    try:
        (string, rq) = await asyncio.wait_for(q.get(), timeout=0.001)  # 1ms
    except asyncio.exceptions.TimeoutError:
        break
    strings.append(string)
    queues.append(rq)
outs = pipe(strings, batch_size=len(strings))
for rq, out in zip(queues, outs):
    await rq.put(out)

Again, the proposed code is optimized for readability, not for being the best code. First of all, there's no batch size limit, which is usually not a great idea. Next, the timeout is reset on every queue fetch, meaning you could wait much more than 1ms before running the inference (delaying the first request by that much). It would be better to have a single 1ms deadline.
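To illustrate those two points, here is one way the batching loop could be reshaped with a hard batch-size cap and a single deadline instead of a per-fetch timeout. This is still a sketch under the same pseudo-code caveat as above, and max_batch_size and max_wait are made-up tuning knobs rather than library settings; the caveat in the next paragraph about waiting even when the queue stays empty still applies to this variant.

import asyncio


async def batched_server_loop(q, pipe, max_batch_size=8, max_wait=0.001):
    while True:
        (string, rq) = await q.get()
        strings, queues = [string], [rq]
        loop = asyncio.get_running_loop()
        deadline = loop.time() + max_wait  # single deadline for the whole batch
        while len(strings) < max_batch_size:
            remaining = deadline - loop.time()
            if remaining <= 0:
                break
            try:
                (string, rq) = await asyncio.wait_for(q.get(), timeout=remaining)
            except asyncio.TimeoutError:
                break
            strings.append(string)
            queues.append(rq)
        outs = pipe(strings, batch_size=len(strings))
        for rq, out in zip(queues, outs):
            await rq.put(out)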
This will always wait for 1ms even if the queue is empty, which might not be the best since you probably want to start doing inference if there’s nothing in the queue. But maybe it does make sense if batching is really crucial for your use case. Again, there’s really no one best solution. Few things you might want to consider Error checking There’s a lot that can go wrong in production: out of memory, out of space, loading the model might fail, the query might be wrong, the query might be correct but still fail to run because of a model misconfiguration, and so on. Generally, it’s good if the server outputs the errors to the user, so adding a lot of try..except statements to show those errors is a good idea. But keep in mind it may also be a security risk to reveal all those errors depending on your security context. Circuit breaking Webservers usually look better when they do circuit breaking. It means they return proper errors when they’re overloaded instead of just waiting for the query indefinitely. Return a 503 error instead of waiting for a super long time or a 504 after a long time. This is relatively easy to implement in the proposed code since there is a single queue. Looking at the queue size is a basic way to start returning errors before your webserver fails under load. Blocking the main thread Currently PyTorch is not async aware, and computation will block the main thread while running. That means it would be better if PyTorch was forced to run on its own thread/process. This wasn’t done here because the code is a lot more complex (mostly because threads and async and queues don’t play nice together). But ultimately it does the same thing. This would be important if the inference of single items were long (> 1s) because in this case, it means every query during inference would have to wait for 1s before even receiving an error. Dynamic batching In general, batching is not necessarily an improvement over passing 1 item at a time (see batching details for more information). But it can be very effective when used in the correct setting. In the API, there is no dynamic batching by default (too much opportunity for a slowdown). But for BLOOM inference - which is a very large model - dynamic batching is essential to provide a decent experience for everyone.
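Returning to the circuit-breaking point above: with a single queue, a basic breaker can be added directly in the request handler by rejecting work when the queue is already long. The sketch below adapts the homepage handler from earlier; the threshold of 32 is an arbitrary placeholder, not a recommended value.

import asyncio
from starlette.responses import JSONResponse

QUEUE_LIMIT = 32  # arbitrary placeholder; size it from your latency budget


async def homepage(request):
    if request.app.model_queue.qsize() >= QUEUE_LIMIT:
        # Fail fast instead of letting requests pile up behind the model.
        return JSONResponse({"error": "server overloaded, try again later"}, status_code=503)
    payload = await request.body()
    string = payload.decode("utf-8")
    response_q = asyncio.Queue()
    await request.app.model_queue.put((string, response_q))
    output = await response_q.get()
    return JSONResponse(output)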
https://huggingface.co/docs/transformers/pad_truncation
Padding and truncation

Batched inputs are often different lengths, so they can't be converted to fixed-size tensors. Padding and truncation are strategies for dealing with this problem, to create rectangular tensors from batches of varying lengths. Padding adds a special padding token to ensure shorter sequences will have the same length as either the longest sequence in a batch or the maximum length accepted by the model. Truncation works in the other direction by truncating long sequences.

In most cases, padding your batch to the length of the longest sequence and truncating to the maximum length a model can accept works pretty well. However, the API supports more strategies if you need them. The three arguments you need to know are: padding, truncation and max_length.

The padding argument controls padding. It can be a boolean or a string:

- True or 'longest': pad to the longest sequence in the batch (no padding is applied if you only provide a single sequence).
- 'max_length': pad to a length specified by the max_length argument or the maximum length accepted by the model if no max_length is provided (max_length=None). Padding will still be applied if you only provide a single sequence.
- False or 'do_not_pad': no padding is applied. This is the default behavior.

The truncation argument controls truncation. It can be a boolean or a string:

- True or 'longest_first': truncate to a maximum length specified by the max_length argument or the maximum length accepted by the model if no max_length is provided (max_length=None). This will truncate token by token, removing a token from the longest sequence in the pair until the proper length is reached.
- 'only_second': truncate to a maximum length specified by the max_length argument or the maximum length accepted by the model if no max_length is provided (max_length=None). This will only truncate the second sentence of a pair if a pair of sequences (or a batch of pairs of sequences) is provided.
- 'only_first': truncate to a maximum length specified by the max_length argument or the maximum length accepted by the model if no max_length is provided (max_length=None). This will only truncate the first sentence of a pair if a pair of sequences (or a batch of pairs of sequences) is provided.
- False or 'do_not_truncate': no truncation is applied. This is the default behavior.

The max_length argument controls the length of the padding and truncation. It can be an integer or None, in which case it will default to the maximum length the model can accept. If the model has no specific maximum input length, truncation or padding to max_length is deactivated.

The following table summarizes the recommended way to set up padding and truncation. If you use pairs of input sequences in any of the following examples, you can replace truncation=True by a STRATEGY selected in ['only_first', 'only_second', 'longest_first'], i.e. truncation='only_second' or truncation='longest_first' to control how both sequences in the pair are truncated as detailed before.
| Truncation | Padding | Instruction |
|---|---|---|
| no truncation | no padding | tokenizer(batch_sentences) |
| | padding to max sequence in batch | tokenizer(batch_sentences, padding=True) or tokenizer(batch_sentences, padding='longest') |
| | padding to max model input length | tokenizer(batch_sentences, padding='max_length') |
| | padding to specific length | tokenizer(batch_sentences, padding='max_length', max_length=42) |
| | padding to a multiple of a value | tokenizer(batch_sentences, padding=True, pad_to_multiple_of=8) |
| truncation to max model input length | no padding | tokenizer(batch_sentences, truncation=True) or tokenizer(batch_sentences, truncation=STRATEGY) |
| | padding to max sequence in batch | tokenizer(batch_sentences, padding=True, truncation=True) or tokenizer(batch_sentences, padding=True, truncation=STRATEGY) |
| | padding to max model input length | tokenizer(batch_sentences, padding='max_length', truncation=True) or tokenizer(batch_sentences, padding='max_length', truncation=STRATEGY) |
| | padding to specific length | Not possible |
| truncation to specific length | no padding | tokenizer(batch_sentences, truncation=True, max_length=42) or tokenizer(batch_sentences, truncation=STRATEGY, max_length=42) |
| | padding to max sequence in batch | tokenizer(batch_sentences, padding=True, truncation=True, max_length=42) or tokenizer(batch_sentences, padding=True, truncation=STRATEGY, max_length=42) |
| | padding to max model input length | Not possible |
| | padding to specific length | tokenizer(batch_sentences, padding='max_length', truncation=True, max_length=42) or tokenizer(batch_sentences, padding='max_length', truncation=STRATEGY, max_length=42) |
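To see a couple of these combinations in practice, here is a small hedged example. It uses bert-base-uncased purely for illustration; any checkpoint with a tokenizer behaves the same way.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch_sentences = ["A short sentence.", "A noticeably longer sentence that will determine the padded batch length."]

# Pad to the longest sequence in the batch and truncate to the model's maximum input length.
batch = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="pt")
print(batch["input_ids"].shape)

# Pad and truncate everything to an explicit length instead.
fixed = tokenizer(batch_sentences, padding="max_length", truncation=True, max_length=32, return_tensors="pt")
print(fixed["input_ids"].shape)  # (2, 32)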
https://huggingface.co/docs/transformers/bertology
BERTology

There is a growing field of study concerned with investigating the inner workings of large-scale transformers like BERT (that some call "BERTology"). Some good examples of this field are:

BERT Rediscovers the Classical NLP Pipeline by Ian Tenney, Dipanjan Das, Ellie Pavlick: https://arxiv.org/abs/1905.05950
Are Sixteen Heads Really Better than One? by Paul Michel, Omer Levy, Graham Neubig: https://arxiv.org/abs/1905.10650
What Does BERT Look At? An Analysis of BERT's Attention by Kevin Clark, Urvashi Khandelwal, Omer Levy, Christopher D. Manning: https://arxiv.org/abs/1906.04341
CAT-probing: A Metric-based Approach to Interpret How Pre-trained Models for Programming Language Attend Code Structure: https://arxiv.org/abs/2210.04633

In order to help this new field develop, we have included a few additional features in the BERT/GPT/GPT-2 models to help people access the inner representations, mainly adapted from the great work of Paul Michel (https://arxiv.org/abs/1905.10650):

accessing all the hidden-states of BERT/GPT/GPT-2,
accessing all the attention weights for each head of BERT/GPT/GPT-2,
retrieving the output values and gradients of the heads in order to compute head importance scores and prune heads as explained in https://arxiv.org/abs/1905.10650.

To help you understand and use these features, we have added a specific example script, bertology.py, which extracts information from and prunes a model pre-trained on GLUE.
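As a quick, hedged illustration of those hooks (separate from the bertology.py script itself), the snippet below loads BERT with hidden states and attentions enabled and prunes a few attention heads; the layer and head indices are arbitrary.

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True, output_attentions=True)

inputs = tokenizer("BERTology studies the inner workings of BERT.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(len(outputs.hidden_states))   # embedding output + one hidden state per layer
print(outputs.attentions[0].shape)  # (batch_size, num_heads, seq_len, seq_len)

# Prune heads 0 and 2 in layer 1 and head 3 in layer 2 (arbitrary choice).
model.prune_heads({1: [0, 2], 2: [3]})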
https://huggingface.co/docs/transformers/perplexity
Perplexity of fixed-length models

Perplexity (PPL) is one of the most common metrics for evaluating language models. Before diving in, we should note that the metric applies specifically to classical language models (sometimes called autoregressive or causal language models) and is not well defined for masked language models like BERT (see summary of the models).

Perplexity is defined as the exponentiated average negative log-likelihood of a sequence. If we have a tokenized sequence $X = (x_0, x_1, \dots, x_t)$, then the perplexity of $X$ is

$$\text{PPL}(X) = \exp \left\{ -\frac{1}{t} \sum_i^t \log p_\theta (x_i \mid x_{<i}) \right\}$$

where $\log p_\theta (x_i \mid x_{<i})$ is the log-likelihood of the i-th token conditioned on the preceding tokens $x_{<i}$ according to our model. Intuitively, it can be thought of as an evaluation of the model's ability to predict uniformly among the set of specified tokens in a corpus. Importantly, this means that the tokenization procedure has a direct impact on a model's perplexity, which should always be taken into consideration when comparing different models.

This is also equivalent to the exponentiation of the cross-entropy between the data and model predictions. For more intuition about perplexity and its relationship to Bits Per Character (BPC) and data compression, check out this fantastic blog post on The Gradient.

Calculating PPL with fixed-length models

If we weren't limited by a model's context size, we would evaluate the model's perplexity by autoregressively factorizing a sequence and conditioning on the entire preceding subsequence at each step. When working with approximate models, however, we typically have a constraint on the number of tokens the model can process. The largest version of GPT-2, for example, has a fixed length of 1024 tokens, so we cannot calculate $p_\theta(x_t \mid x_{<t})$ directly when $t$ is greater than 1024.

Instead, the sequence is typically broken into subsequences equal to the model's maximum input size. If a model's max input size is $k$, we then approximate the likelihood of a token $x_t$ by conditioning only on the $k-1$ tokens that precede it rather than the entire context. When evaluating the model's perplexity of a sequence, a tempting but suboptimal approach is to break the sequence into disjoint chunks and add up the decomposed log-likelihoods of each segment independently. This is quick to compute since the perplexity of each segment can be computed in one forward pass, but serves as a poor approximation of the fully-factorized perplexity and will typically yield a higher (worse) PPL because the model will have less context at most of the prediction steps.

Instead, the PPL of fixed-length models should be evaluated with a sliding-window strategy. This involves repeatedly sliding the context window so that the model has more context when making each prediction. This is a closer approximation to the true decomposition of the sequence probability and will typically yield a more favorable score. The downside is that it requires a separate forward pass for each token in the corpus. A good practical compromise is to employ a strided sliding window, moving the context by larger strides rather than sliding by 1 token at a time. This allows computation to proceed much faster while still giving the model a large context to make predictions at each step.
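Before the full GPT-2 example in the next section, the relationship "perplexity is the exponentiated average negative log-likelihood" can be seen in just a few lines. This is a minimal sketch for a single short sequence that fits entirely in the model's context; gpt2 is used only as an example checkpoint.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Perplexity is the exponentiated average negative log-likelihood.", return_tensors="pt")
with torch.no_grad():
    # Passing input_ids as labels makes the model return the mean negative log-likelihood as the loss.
    loss = model(**inputs, labels=inputs["input_ids"]).loss
print(torch.exp(loss))  # perplexity of this single sequence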
Example: Calculating perplexity with GPT-2 in 🤗 Transformers Let’s demonstrate this process with GPT-2. from transformers import GPT2LMHeadModel, GPT2TokenizerFast device = "cuda" model_id = "gpt2-large" model = GPT2LMHeadModel.from_pretrained(model_id).to(device) tokenizer = GPT2TokenizerFast.from_pretrained(model_id) We’ll load in the WikiText-2 dataset and evaluate the perplexity using a few different sliding-window strategies. Since this dataset is small and we’re just doing one forward pass over the set, we can just load and encode the entire dataset in memory. from datasets import load_dataset test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test") encodings = tokenizer("\n\n".join(test["text"]), return_tensors="pt") With 🤗 Transformers, we can simply pass the input_ids as the labels to our model, and the average negative log-likelihood for each token is returned as the loss. With our sliding window approach, however, there is overlap in the tokens we pass to the model at each iteration. We don’t want the log-likelihood for the tokens we’re just treating as context to be included in our loss, so we can set these targets to -100 so that they are ignored. The following is an example of how we could do this with a stride of 512. This means that the model will have at least 512 tokens for context when calculating the conditional likelihood of any one token (provided there are 512 preceding tokens available to condition on). import torch from tqdm import tqdm max_length = model.config.n_positions stride = 512 seq_len = encodings.input_ids.size(1) nlls = [] prev_end_loc = 0 for begin_loc in tqdm(range(0, seq_len, stride)): end_loc = min(begin_loc + max_length, seq_len) trg_len = end_loc - prev_end_loc input_ids = encodings.input_ids[:, begin_loc:end_loc].to(device) target_ids = input_ids.clone() target_ids[:, :-trg_len] = -100 with torch.no_grad(): outputs = model(input_ids, labels=target_ids) neg_log_likelihood = outputs.loss nlls.append(neg_log_likelihood) prev_end_loc = end_loc if end_loc == seq_len: break ppl = torch.exp(torch.stack(nlls).mean()) Running this with the stride length equal to the max input length is equivalent to the suboptimal, non-sliding-window strategy we discussed above. The smaller the stride, the more context the model will have in making each prediction, and the better the reported perplexity will typically be. When we run the above with stride = 1024, i.e. no overlap, the resulting PPL is 19.44, which is about the same as the 19.93 reported in the GPT-2 paper. By using stride = 512 and thereby employing our striding window strategy, this jumps down to 16.45. This is not only a more favorable score, but is calculated in a way that is closer to the true autoregressive decomposition of a sequence likelihood.
https://huggingface.co/docs/transformers/main_classes/callback
Callbacks Callbacks are objects that can customize the behavior of the training loop in the PyTorch Trainer (this feature is not yet implemented in TensorFlow) that can inspect the training loop state (for progress reporting, logging on TensorBoard or other ML platforms…) and take decisions (like early stopping). Callbacks are “read only” pieces of code, apart from the TrainerControl object they return, they cannot change anything in the training loop. For customizations that require changes in the training loop, you should subclass Trainer and override the methods you need (see trainer for examples). By default a Trainer will use the following callbacks: DefaultFlowCallback which handles the default behavior for logging, saving and evaluation. PrinterCallback or ProgressCallback to display progress and print the logs (the first one is used if you deactivate tqdm through the TrainingArguments, otherwise it’s the second one). TensorBoardCallback if tensorboard is accessible (either through PyTorch >= 1.4 or tensorboardX). WandbCallback if wandb is installed. CometCallback if comet_ml is installed. MLflowCallback if mlflow is installed. NeptuneCallback if neptune is installed. AzureMLCallback if azureml-sdk is installed. CodeCarbonCallback if codecarbon is installed. ClearMLCallback if clearml is installed. DagsHubCallback if dagshub is installed. FlyteCallback if flyte is installed. The main class that implements callbacks is TrainerCallback. It gets the TrainingArguments used to instantiate the Trainer, can access that Trainer’s internal state via TrainerState, and can take some actions on the training loop via TrainerControl. Available Callbacks Here is the list of the available TrainerCallback in the library: class transformers.integrations.CometCallback < source > ( ) A TrainerCallback that sends the logs to Comet ML. Setup the optional Comet.ml integration. Environment: COMET_MODE (str, optional, defaults to ONLINE): Whether to create an online, offline experiment or disable Comet logging. Can be OFFLINE, ONLINE, or DISABLED. COMET_PROJECT_NAME (str, optional): Comet project name for experiments. COMET_OFFLINE_DIRECTORY (str, optional): Folder to use for saving offline experiments when COMET_MODE is OFFLINE. COMET_LOG_ASSETS (str, optional, defaults to TRUE): Whether or not to log training assets (tf event logs, checkpoints, etc), to Comet. Can be TRUE, or FALSE. For a number of configurable items in the environment, see here. class transformers.DefaultFlowCallback < source > ( ) A TrainerCallback that handles the default flow of the training loop for logs, evaluation and checkpoints. class transformers.EarlyStoppingCallback < source > ( early_stopping_patience: int = 1 early_stopping_threshold: typing.Optional[float] = 0.0 ) Parameters early_stopping_patience (int) — Use with metric_for_best_model to stop training when the specified metric worsens for early_stopping_patience evaluation calls. early_stopping_threshold(float, optional) — Use with TrainingArguments metric_for_best_model and early_stopping_patience to denote how much the specified metric must improve to satisfy early stopping conditions. ` A TrainerCallback that handles early stopping. This callback depends on TrainingArguments argument load_best_model_at_end functionality to set best_metric in TrainerState. Note that if the TrainingArguments argument save_steps differs from eval_steps, the early stopping will not occur until the next save step. 
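For example, wiring early stopping into a Trainer could look like the hedged sketch below; model, train_dataset and eval_dataset are placeholders for your own objects, and the metric name is just an example.

from transformers import Trainer, TrainingArguments, EarlyStoppingCallback

training_args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="steps",
    eval_steps=500,
    save_steps=500,                   # keep in sync with eval_steps (see the note above)
    load_best_model_at_end=True,      # required so that best_metric is tracked in TrainerState
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)
trainer = Trainer(
    model=model,                      # placeholder: your model
    args=training_args,
    train_dataset=train_dataset,      # placeholder: your datasets
    eval_dataset=eval_dataset,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
trainer.train()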
class transformers.integrations.TensorBoardCallback < source > ( tb_writer = None ) Parameters tb_writer (SummaryWriter, optional) — The writer to use. Will instantiate one if not set. A TrainerCallback that sends the logs to TensorBoard. class transformers.integrations.WandbCallback < source > ( ) A TrainerCallback that logs metrics, media, model checkpoints to Weight and Biases. Setup the optional Weights & Biases (wandb) integration. One can subclass and override this method to customize the setup if needed. Find more information here. You can also override the following environment variables: Environment: WANDB_LOG_MODEL (str, optional, defaults to "false"): Whether to log model and checkpoints during training. Can be "end", "checkpoint" or "false". If set to "end", the model will be uploaded at the end of training. If set to "checkpoint", the checkpoint will be uploaded every args.save_steps . If set to "false", the model will not be uploaded. Use along with load_best_model_at_end() to upload best model. Deprecated in 5.0 Setting WANDB_LOG_MODEL as bool will be deprecated in version 5 of 🤗 Transformers. WANDB_WATCH (str, optional defaults to "false"): Can be "gradients", "all", "parameters", or "false". Set to "all" to log gradients and parameters. WANDB_PROJECT (str, optional, defaults to "huggingface"): Set this to a custom string to store results in a different project. WANDB_DISABLED (bool, optional, defaults to False): Whether to disable wandb entirely. Set WANDB_DISABLED=true to disable. class transformers.integrations.MLflowCallback < source > ( ) A TrainerCallback that sends the logs to MLflow. Can be disabled by setting environment variable DISABLE_MLFLOW_INTEGRATION = TRUE. Setup the optional MLflow integration. Environment: HF_MLFLOW_LOG_ARTIFACTS (str, optional): Whether to use MLflow .log_artifact() facility to log artifacts. This only makes sense if logging to a remote server, e.g. s3 or GCS. If set to True or 1, will copy each saved checkpoint on each save in TrainingArguments’s output_dir to the local or remote artifact storage. Using it without a remote storage will just copy the files to your artifact location. MLFLOW_EXPERIMENT_NAME (str, optional, defaults to None): Whether to use an MLflow experiment_name under which to launch the run. Default to None which will point to the Default experiment in MLflow. Otherwise, it is a case sensitive name of the experiment to be activated. If an experiment with this name does not exist, a new experiment with this name is created. MLFLOW_TAGS (str, optional): A string dump of a dictionary of key/value pair to be added to the MLflow run as tags. Example: os.environ['MLFLOW_TAGS']='{"release.candidate": "RC1", "release.version": "2.2.0"}'. MLFLOW_NESTED_RUN (str, optional): Whether to use MLflow nested runs. If set to True or 1, will create a nested run inside the current run. MLFLOW_RUN_ID (str, optional): Allow to reattach to an existing run which can be usefull when resuming training from a checkpoint. When MLFLOW_RUN_ID environment variable is set, start_run attempts to resume a run with the specified run ID and other parameters are ignored. MLFLOW_FLATTEN_PARAMS (str, optional, defaults to False): Whether to flatten the parameters dictionary before logging. 
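Since these integrations are driven by environment variables, one hedged way to configure them is from Python before the Trainer is created; the values below are examples only.

import os

# Weights & Biases
os.environ["WANDB_PROJECT"] = "my-experiments"   # example project name
os.environ["WANDB_LOG_MODEL"] = "end"            # upload the final model at the end of training
os.environ["WANDB_WATCH"] = "false"              # skip gradient/parameter logging

# MLflow
os.environ["MLFLOW_EXPERIMENT_NAME"] = "my-experiments"
os.environ["HF_MLFLOW_LOG_ARTIFACTS"] = "1"      # copy saved checkpoints to the artifact store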
class transformers.integrations.NeptuneCallback < source > ( api_token: typing.Optional[str] = None project: typing.Optional[str] = None name: typing.Optional[str] = None base_namespace: str = 'finetuning' run = None log_parameters: bool = True log_checkpoints: typing.Optional[str] = None **neptune_run_kwargs ) Parameters api_token (str, optional) — Neptune API token obtained upon registration. You can leave this argument out if you have saved your token to the NEPTUNE_API_TOKEN environment variable (strongly recommended). See full setup instructions in the docs. project (str, optional) — Name of an existing Neptune project, in the form “workspace-name/project-name”. You can find and copy the name in Neptune from the project settings -> Properties. If None (default), the value of the NEPTUNE_PROJECT environment variable is used. name (str, optional) — Custom name for the run. base_namespace (str, optional, defaults to “finetuning”) — In the Neptune run, the root namespace that will contain all of the metadata logged by the callback. log_parameters (bool, optional, defaults to True) — If True, logs all Trainer arguments and model parameters provided by the Trainer. log_checkpoints (str, optional) — If “same”, uploads checkpoints whenever they are saved by the Trainer. If “last”, uploads only the most recently saved checkpoint. If “best”, uploads the best checkpoint (among the ones saved by the Trainer). If None, does not upload checkpoints. run (Run, optional) — Pass a Neptune run object if you want to continue logging to an existing run. Read more about resuming runs in the docs. **neptune_run_kwargs (optional) — Additional keyword arguments to be passed directly to the neptune.init_run() function when a new run is created. TrainerCallback that sends the logs to Neptune. For instructions and examples, see the Transformers integration guide in the Neptune documentation. class transformers.integrations.ClearMLCallback < source > ( ) A TrainerCallback that sends the logs to ClearML. Environment: CLEARML_PROJECT (str, optional, defaults to HuggingFace Transformers): ClearML project name. CLEARML_TASK (str, optional, defaults to Trainer): ClearML task name. CLEARML_LOG_MODEL (bool, optional, defaults to False): Whether to log models as artifacts during training. class transformers.integrations.DagsHubCallback < source > ( ) A TrainerCallback that logs to DagsHub. Extends MLflowCallback Setup the DagsHub’s Logging integration. Environment: HF_DAGSHUB_LOG_ARTIFACTS (str, optional): Whether to save the data and model artifacts for the experiment. Default to False. class transformers.integrations.FlyteCallback < source > ( save_log_history: bool = True sync_checkpoints: bool = True ) Parameters save_log_history (bool, optional, defaults to True) — When set to True, the training logs are saved as a Flyte Deck. sync_checkpoints (bool, optional, defaults to True) — When set to True, checkpoints are synced with Flyte and can be used to resume training in the case of an interruption. A TrainerCallback that sends the logs to Flyte. NOTE: This callback only works within a Flyte task. Example: from flytekit import current_context, task @task def train_hf_transformer(): cp = current_context().checkpoint trainer = Trainer(..., callbacks=[FlyteCallback()]) output = trainer.train(resume_from_checkpoint=cp.restore()) TrainerCallback class transformers.TrainerCallback < source > ( ) Parameters args (TrainingArguments) — The training arguments used to instantiate the Trainer. 
state (TrainerState) — The current state of the Trainer. control (TrainerControl) — The object that is returned to the Trainer and can be used to make some decisions. model (PreTrainedModel or torch.nn.Module) — The model being trained. tokenizer (PreTrainedTokenizer) — The tokenizer used for encoding the data. optimizer (torch.optim.Optimizer) — The optimizer used for the training steps. lr_scheduler (torch.optim.lr_scheduler.LambdaLR) — The scheduler used for setting the learning rate. train_dataloader (torch.utils.data.DataLoader, optional) — The current dataloader used for training. eval_dataloader (torch.utils.data.DataLoader, optional) — The current dataloader used for training. metrics (Dict[str, float]) — The metrics computed by the last evaluation phase. Those are only accessible in the event on_evaluate. logs (Dict[str, float]) — The values to log. Those are only accessible in the event on_log. A class for objects that will inspect the state of the training loop at some events and take some decisions. At each of those events the following arguments are available: The control object is the only one that can be changed by the callback, in which case the event that changes it should return the modified version. The argument args, state and control are positionals for all events, all the others are grouped in kwargs. You can unpack the ones you need in the signature of the event using them. As an example, see the code of the simple ~transformer.PrinterCallback. Example: class PrinterCallback(TrainerCallback): def on_log(self, args, state, control, logs=None, **kwargs): _ = logs.pop("total_flos", None) if state.is_local_process_zero: print(logs) on_epoch_begin < source > ( args: TrainingArguments state: TrainerState control: TrainerControl **kwargs ) Event called at the beginning of an epoch. on_epoch_end < source > ( args: TrainingArguments state: TrainerState control: TrainerControl **kwargs ) Event called at the end of an epoch. on_evaluate < source > ( args: TrainingArguments state: TrainerState control: TrainerControl **kwargs ) Event called after an evaluation phase. on_init_end < source > ( args: TrainingArguments state: TrainerState control: TrainerControl **kwargs ) Event called at the end of the initialization of the Trainer. on_log < source > ( args: TrainingArguments state: TrainerState control: TrainerControl **kwargs ) Event called after logging the last logs. on_predict < source > ( args: TrainingArguments state: TrainerState control: TrainerControl metrics **kwargs ) Event called after a successful prediction. on_prediction_step < source > ( args: TrainingArguments state: TrainerState control: TrainerControl **kwargs ) Event called after a prediction step. on_save < source > ( args: TrainingArguments state: TrainerState control: TrainerControl **kwargs ) Event called after a checkpoint save. on_step_begin < source > ( args: TrainingArguments state: TrainerState control: TrainerControl **kwargs ) Event called at the beginning of a training step. If using gradient accumulation, one training step might take several inputs. on_step_end < source > ( args: TrainingArguments state: TrainerState control: TrainerControl **kwargs ) Event called at the end of a training step. If using gradient accumulation, one training step might take several inputs. on_substep_end < source > ( args: TrainingArguments state: TrainerState control: TrainerControl **kwargs ) Event called at the end of an substep during gradient accumulation. 
on_train_begin < source > ( args: TrainingArguments state: TrainerState control: TrainerControl **kwargs ) Event called at the beginning of training. on_train_end < source > ( args: TrainingArguments state: TrainerState control: TrainerControl **kwargs ) Event called at the end of training. Here is an example of how to register a custom callback with the PyTorch Trainer: class MyCallback(TrainerCallback): "A callback that prints a message at the beginning of training" def on_train_begin(self, args, state, control, **kwargs): print("Starting training") trainer = Trainer( model, args, train_dataset=train_dataset, eval_dataset=eval_dataset, callbacks=[MyCallback], ) Another way to register a callback is to call trainer.add_callback() as follows: trainer = Trainer(...) trainer.add_callback(MyCallback) trainer.add_callback(MyCallback()) TrainerState class transformers.TrainerState < source > ( epoch: typing.Optional[float] = None global_step: int = 0 max_steps: int = 0 logging_steps: int = 500 eval_steps: int = 500 save_steps: int = 500 num_train_epochs: int = 0 total_flos: float = 0 log_history: typing.List[typing.Dict[str, float]] = None best_metric: typing.Optional[float] = None best_model_checkpoint: typing.Optional[str] = None is_local_process_zero: bool = True is_world_process_zero: bool = True is_hyper_param_search: bool = False trial_name: str = None trial_params: typing.Dict[str, typing.Union[str, float, int, bool]] = None ) Parameters epoch (float, optional) — Only set during training, will represent the epoch the training is at (the decimal part being the percentage of the current epoch completed). global_step (int, optional, defaults to 0) — During training, represents the number of update steps completed. max_steps (int, optional, defaults to 0) — The number of update steps to do during the current training. logging_steps (int, optional, defaults to 500) — Log every X updates steps eval_steps (int, optional) — Run an evaluation every X steps. save_steps (int, optional, defaults to 500) — Save checkpoint every X updates steps. total_flos (float, optional, defaults to 0) — The total number of floating operations done by the model since the beginning of training (stored as floats to avoid overflow). log_history (List[Dict[str, float]], optional) — The list of logs done since the beginning of training. best_metric (float, optional) — When tracking the best model, the value of the best metric encountered so far. best_model_checkpoint (str, optional) — When tracking the best model, the value of the name of the checkpoint for the best model encountered so far. is_local_process_zero (bool, optional, defaults to True) — Whether or not this process is the local (e.g., on one machine if training in a distributed fashion on several machines) main process. is_world_process_zero (bool, optional, defaults to True) — Whether or not this process is the global main process (when training in a distributed fashion on several machines, this is only going to be True for one process). is_hyper_param_search (bool, optional, defaults to False) — Whether we are in the process of a hyper parameter search using Trainer.hyperparameter_search. This will impact the way data will be logged in TensorBoard. A class containing the Trainer inner state that will be saved along the model and optimizer when checkpointing and passed to the TrainerCallback. In all this class, one step is to be understood as one update step. 
When using gradient accumulation, one update step may require several forward and backward passes: if you use gradient_accumulation_steps=n, then one update step requires going through n batches. Create an instance from the content of json_path. Save the content of this instance in JSON format inside json_path. TrainerControl class transformers.TrainerControl < source > ( should_training_stop: bool = False should_epoch_stop: bool = False should_save: bool = False should_evaluate: bool = False should_log: bool = False ) Parameters should_training_stop (bool, optional, defaults to False) — Whether or not the training should be interrupted. If True, this variable will not be set back to False. The training will just stop. should_epoch_stop (bool, optional, defaults to False) — Whether or not the current epoch should be interrupted. If True, this variable will be set back to False at the beginning of the next epoch. should_save (bool, optional, defaults to False) — Whether or not the model should be saved at this step. If True, this variable will be set back to False at the beginning of the next step. should_evaluate (bool, optional, defaults to False) — Whether or not the model should be evaluated at this step. If True, this variable will be set back to False at the beginning of the next step. should_log (bool, optional, defaults to False) — Whether or not the logs should be reported at this step. If True, this variable will be set back to False at the beginning of the next step. A class that handles the Trainer control flow. This class is used by the TrainerCallback to activate some switches in the training loop.
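To show how a callback can flip these switches, here is a hedged sketch of a callback that stops training once a step budget is exhausted; the budget is arbitrary and the class is not part of the library.

from transformers import TrainerCallback

class StopAfterNStepsCallback(TrainerCallback):
    "A callback that stops training once a fixed number of update steps has been reached."

    def __init__(self, step_budget=1000):
        self.step_budget = step_budget

    def on_step_end(self, args, state, control, **kwargs):
        if state.global_step >= self.step_budget:
            control.should_training_stop = True
        # Events that modify the control object should return the modified version.
        return control

# trainer = Trainer(..., callbacks=[StopAfterNStepsCallback(step_budget=200)])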
https://huggingface.co/docs/transformers/main_classes/keras_callbacks
When training a Transformers model with Keras, there are some library-specific callbacks available to automate common tasks: class transformers.KerasMetricCallback < source > ( metric_fn: typing.Callable eval_dataset: typing.Union[tensorflow.python.data.ops.dataset_ops.DatasetV2, numpy.ndarray, tensorflow.python.framework.ops.Tensor, tuple, dict] output_cols: typing.Optional[typing.List[str]] = None label_cols: typing.Optional[typing.List[str]] = None batch_size: typing.Optional[int] = None predict_with_generate: bool = False use_xla_generation: bool = False generate_kwargs: typing.Optional[dict] = None ) Parameters metric_fn (Callable) — Metric function provided by the user. It will be called with two arguments - predictions and labels. These contain the model’s outputs and matching labels from the dataset. It should return a dict mapping metric names to numerical values. eval_dataset (tf.data.Dataset or dict or tuple or np.ndarray or tf.Tensor) — Validation data to be used to generate predictions for the metric_fn. output_cols (`List[str], optional) — A list of columns to be retained from the model output as the predictions. Defaults to all. label_cols (’List[str], optional’) — A list of columns to be retained from the input dataset as the labels. Will be autodetected if this is not supplied. batch_size (int, optional) — Batch size. Only used when the data is not a pre-batched tf.data.Dataset. predict_with_generate (bool, optional, defaults to False) — Whether we should use model.generate() to get outputs for the model. use_xla_generation (bool, optional, defaults to False) — If we’re generating, whether to compile model generation with XLA. This can massively increase the speed of generation (up to 100X speedup) but will require a new XLA compilation for each input shape. When using XLA generation, it’s a good idea to pad your inputs to the same size, or to use the pad_to_multiple_of argument in your tokenizer or DataCollator, which will reduce the number of unique input shapes and save a lot of compilation time. This option has no effect is predict_with_generate is False. generate_kwargs (dict, optional) — Keyword arguments to pass to model.generate() when generating. Has no effect if predict_with_generate is False. Callback to compute metrics at the end of every epoch. Unlike normal Keras metrics, these do not need to be compilable by TF. It is particularly useful for common NLP metrics like BLEU and ROUGE that require string operations or generation loops that cannot be compiled. Predictions (or generations) will be computed on the eval_dataset before being passed to the metric_fn in np.ndarray format. The metric_fn should compute metrics and return a dict mapping metric names to metric values. We provide an example of a suitable metric_fn that computes ROUGE scores for a summarization model below. Note that this example skips some post-processing for readability and simplicity, and should probably not be used as-is! 
from datasets import load_metric rouge_metric = load_metric("rouge") def rouge_fn(predictions, labels): decoded_predictions = tokenizer.batch_decode(predictions, skip_special_tokens=True) decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True) result = rouge_metric.compute(predictions=decoded_predictions, references=decoded_labels) return {key: value.mid.fmeasure * 100 for key, value in result.items()} The above function will return a dict containing values which will be logged like any other Keras metric: {'rouge1': 37.4199, 'rouge2': 13.9768, 'rougeL': 34.361, 'rougeLsum': 35.0781 class transformers.PushToHubCallback < source > ( output_dir: typing.Union[str, pathlib.Path] save_strategy: typing.Union[str, transformers.trainer_utils.IntervalStrategy] = 'epoch' save_steps: typing.Optional[int] = None tokenizer: typing.Optional[transformers.tokenization_utils_base.PreTrainedTokenizerBase] = None hub_model_id: typing.Optional[str] = None hub_token: typing.Optional[str] = None checkpoint: bool = False **model_card_args ) Parameters output_dir (str) — The output directory where the model predictions and checkpoints will be written and synced with the repository on the Hub. save_strategy (str or IntervalStrategy, optional, defaults to "epoch") — The checkpoint save strategy to adopt during training. Possible values are: "no": Save is done at the end of training. "epoch": Save is done at the end of each epoch. "steps": Save is done every save_steps save_steps (int, optional) — The number of steps between saves when using the “steps” save_strategy. tokenizer (PreTrainedTokenizerBase, optional) — The tokenizer used by the model. If supplied, will be uploaded to the repo alongside the weights. hub_model_id (str, optional) — The name of the repository to keep in sync with the local output_dir. It can be a simple model ID in which case the model will be pushed in your namespace. Otherwise it should be the whole repository name, for instance "user_name/model", which allows you to push to an organization you are a member of with "organization_name/model". Will default to the name of output_dir. hub_token (str, optional) — The token to use to push the model to the Hub. Will default to the token in the cache folder obtained with huggingface-cli login. checkpoint (bool, optional, defaults to False) — Whether to save full training checkpoints (including epoch and optimizer state) to allow training to be resumed. Only usable when save_strategy is "epoch". Callback that will save and push the model to the Hub regularly. By default, it pushes once per epoch, but this can be changed with the save_strategy argument. Pushed models can be accessed like any other model on the hub, such as with the from_pretrained method. from transformers.keras_callbacks import PushToHubCallback push_to_hub_callback = PushToHubCallback( output_dir="./model_save", tokenizer=tokenizer, hub_model_id="gpt5-7xlarge", ) model.fit(train_dataset, callbacks=[push_to_hub_callback])
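Putting the two Keras callbacks together, a training call could look like the sketch below. This is an illustration only: rouge_fn is the example metric function above, while model, tokenizer, tf_train_dataset and tf_eval_dataset are placeholders for your own objects.

from transformers.keras_callbacks import KerasMetricCallback, PushToHubCallback

metric_callback = KerasMetricCallback(
    metric_fn=rouge_fn,               # the example metric function defined above
    eval_dataset=tf_eval_dataset,     # placeholder: a batched tf.data.Dataset of validation data
    predict_with_generate=True,       # generation is needed for ROUGE-style metrics
)
push_callback = PushToHubCallback(
    output_dir="./model_save",
    tokenizer=tokenizer,              # placeholder tokenizer
    hub_model_id="my-summarization-model",  # example repository name
)

model.fit(
    tf_train_dataset,
    validation_data=tf_eval_dataset,
    epochs=3,
    callbacks=[metric_callback, push_callback],
)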
https://huggingface.co/docs/transformers/main_classes/configuration
The base class PretrainedConfig implements the common methods for loading/saving a configuration either from a local file or directory, or from a pretrained model configuration provided by the library (downloaded from HuggingFace’s AWS S3 repository). Each derived config class implements model specific attributes. Common attributes present in all config classes are: hidden_size, num_attention_heads, and num_hidden_layers. Text models further implement: vocab_size. class transformers.PretrainedConfig < source > ( **kwargs ) Parameters name_or_path (str, optional, defaults to "") — Store the string that was passed to PreTrainedModel.from_pretrained() or TFPreTrainedModel.from_pretrained() as pretrained_model_name_or_path if the configuration was created with such a method. output_hidden_states (bool, optional, defaults to False) — Whether or not the model should return all hidden-states. output_attentions (bool, optional, defaults to False) — Whether or not the model should returns all attentions. return_dict (bool, optional, defaults to True) — Whether or not the model should return a ModelOutput instead of a plain tuple. is_encoder_decoder (bool, optional, defaults to False) — Whether the model is used as an encoder/decoder or not. is_decoder (bool, optional, defaults to False) — Whether the model is used as decoder or not (in which case it’s used as an encoder). cross_attention_hidden_size** (bool, optional) — The hidden size of the cross-attention layer in case the model is used as a decoder in an encoder-decoder setting and the cross-attention hidden dimension differs from self.config.hidden_size. add_cross_attention (bool, optional, defaults to False) — Whether cross-attention layers should be added to the model. Note, this option is only relevant for models that can be used as decoder models within the EncoderDecoderModel class, which consists of all models in AUTO_MODELS_FOR_CAUSAL_LM. tie_encoder_decoder (bool, optional, defaults to False) — Whether all encoder weights should be tied to their equivalent decoder weights. This requires the encoder and decoder model to have the exact same parameter names. prune_heads (Dict[int, List[int]], optional, defaults to {}) — Pruned heads of the model. The keys are the selected layer indices and the associated values, the list of heads to prune in said layer. For instance {1: [0, 2], 2: [2, 3]} will prune heads 0 and 2 on layer 1 and heads 2 and 3 on layer 2. chunk_size_feed_forward (int, optional, defaults to 0) — The chunk size of all feed forward layers in the residual attention blocks. A chunk size of 0 means that the feed forward layer is not chunked. A chunk size of n means that the feed forward layer processes n < sequence_length embeddings at a time. For more information on feed forward chunking, see How does Feed Forward Chunking work?. Parameters for sequence generation max_length (int, optional, defaults to 20) — Maximum length that will be used by default in the generate method of the model. min_length (int, optional, defaults to 0) — Minimum length that will be used by default in the generate method of the model. do_sample (bool, optional, defaults to False) — Flag that will be used by default in the generate method of the model. Whether or not to use sampling ; use greedy decoding otherwise. early_stopping (bool, optional, defaults to False) — Flag that will be used by default in the generate method of the model. Whether to stop the beam search when at least num_beams sentences are finished per batch or not. 
num_beams (int, optional, defaults to 1) — Number of beams for beam search that will be used by default in the generate method of the model. 1 means no beam search. num_beam_groups (int, optional, defaults to 1) — Number of groups to divide num_beams into in order to ensure diversity among different groups of beams that will be used by default in the generate method of the model. 1 means no group beam search. diversity_penalty (float, optional, defaults to 0.0) — Value to control diversity for group beam search. that will be used by default in the generate method of the model. 0 means no diversity penalty. The higher the penalty, the more diverse are the outputs. temperature (float, optional, defaults to 1.0) — The value used to module the next token probabilities that will be used by default in the generate method of the model. Must be strictly positive. top_k (int, optional, defaults to 50) — Number of highest probability vocabulary tokens to keep for top-k-filtering that will be used by default in the generate method of the model. top_p (float, optional, defaults to 1) — Value that will be used by default in the generate method of the model for top_p. If set to float < 1, only the most probable tokens with probabilities that add up to top_p or higher are kept for generation. typical_p (float, optional, defaults to 1) — Local typicality measures how similar the conditional probability of predicting a target token next is to the expected conditional probability of predicting a random token next, given the partial text already generated. If set to float < 1, the smallest set of the most locally typical tokens with probabilities that add up to typical_p or higher are kept for generation. See this paper for more details. repetition_penalty (float, optional, defaults to 1) — Parameter for repetition penalty that will be used by default in the generate method of the model. 1.0 means no penalty. length_penalty (float, optional, defaults to 1) — Exponential penalty to the length that is used with beam-based generation. It is applied as an exponent to the sequence length, which in turn is used to divide the score of the sequence. Since the score is the log likelihood of the sequence (i.e. negative), length_penalty > 0.0 promotes longer sequences, while length_penalty < 0.0 encourages shorter sequences. no_repeat_ngram_size (int, optional, defaults to 0) — Value that will be used by default in the — generate method of the model for no_repeat_ngram_size. If set to int > 0, all ngrams of that size can only occur once. encoder_no_repeat_ngram_size (int, optional, defaults to 0) — Value that will be used by — default in the generate method of the model for encoder_no_repeat_ngram_size. If set to int > 0, all ngrams of that size that occur in the encoder_input_ids cannot occur in the decoder_input_ids. bad_words_ids (List[int], optional) — List of token ids that are not allowed to be generated that will be used by default in the generate method of the model. In order to get the tokens of the words that should not appear in the generated text, use tokenizer.encode(bad_word, add_prefix_space=True). num_return_sequences (int, optional, defaults to 1) — Number of independently computed returned sequences for each element in the batch that will be used by default in the generate method of the model. output_scores (bool, optional, defaults to False) — Whether the model should return the logits when used for generation. 
return_dict_in_generate (bool, optional, defaults to False) — Whether the model should return a ModelOutput instead of a torch.LongTensor. forced_bos_token_id (int, optional) — The id of the token to force as the first generated token after the decoder_start_token_id. Useful for multilingual models like mBART where the first generated token needs to be the target language token. forced_eos_token_id (int, optional) — The id of the token to force as the last generated token when max_length is reached. remove_invalid_values (bool, optional) — Whether to remove possible nan and inf outputs of the model to prevent the generation method to crash. Note that using remove_invalid_values can slow down generation. Parameters for fine-tuning tasks architectures (List[str], optional) — Model architectures that can be used with the model pretrained weights. finetuning_task (str, optional) — Name of the task used to fine-tune the model. This can be used when converting from an original (TensorFlow or PyTorch) checkpoint. id2label (Dict[int, str], optional) — A map from index (for instance prediction index, or target index) to label. label2id (Dict[str, int], optional) — A map from label to index for the model. num_labels (int, optional) — Number of labels to use in the last layer added to the model, typically for a classification task. task_specific_params (Dict[str, Any], optional) — Additional keyword arguments to store for the current task. problem_type (str, optional) — Problem type for XxxForSequenceClassification models. Can be one of "regression", "single_label_classification" or "multi_label_classification". Parameters linked to the tokenizer tokenizer_class (str, optional) — The name of the associated tokenizer class to use (if none is set, will use the tokenizer associated to the model by default). prefix (str, optional) — A specific prompt that should be added at the beginning of each text before calling the model. bos_token_id (int, optional) — The id of the beginning-of-stream token. pad_token_id (int, optional) — The id of the padding token. eos_token_id (int, optional) — The id of the end-of-stream token. decoder_start_token_id (int, optional) — If an encoder-decoder model starts decoding with a different token than bos, the id of that token. sep_token_id (int, optional) — The id of the separation token. PyTorch specific parameters torchscript (bool, optional, defaults to False) — Whether or not the model should be used with Torchscript. tie_word_embeddings (bool, optional, defaults to True) — Whether the model’s input and output word embeddings should be tied. Note that this is only relevant if the model has a output word embedding layer. torch_dtype (str, optional) — The dtype of the weights. This attribute can be used to initialize the model to a non-default dtype (which is normally float32) and thus allow for optimal storage allocation. For example, if the saved model is float16, ideally we want to load it back using the minimal amount of memory needed to load float16 weights. Since the config object is stored in plain text, this attribute contains just the floating type string without the torch. prefix. For example, for torch.float16 `torch_dtype is the "float16" string. This attribute is currently not being used during model loading time, but this may change in the future versions. But we can already start preparing for the future by saving the dtype with save_pretrained. 
TensorFlow specific parameters use_bfloat16 (bool, optional, defaults to False) — Whether or not the model should use BFloat16 scalars (only used by some TensorFlow models). tf_legacy_loss (bool, optional, defaults to False) — Whether the model should use legacy TensorFlow losses. Legacy losses have variable output shapes and may not be XLA-compatible. This option is here for backward compatibility and will be removed in Transformers v5. Base class for all configuration classes. Handles a few parameters common to all models’ configurations as well as methods for loading/downloading/saving configurations. A configuration file can be loaded and saved to disk. Loading the configuration file and using this file to initialize a model does not load the model weights. It only affects the model’s configuration. Class attributes (overridden by derived classes): model_type (str) — An identifier for the model type, serialized into the JSON file, and used to recreate the correct object in AutoConfig. is_composition (bool) — Whether the config class is composed of multiple sub-configs. In this case the config has to be initialized from two or more configs of type PretrainedConfig like: EncoderDecoderConfig or ~RagConfig. keys_to_ignore_at_inference (List[str]) — A list of keys to ignore by default when looking at dictionary outputs of the model during inference. attribute_map (Dict[str, str]) — A dict that maps model specific attribute names to the standardized naming of attributes. Common attributes (present in all subclasses): vocab_size (int) — The number of tokens in the vocabulary, which is also the first dimension of the embeddings matrix (this attribute may be missing for models that don’t have a text modality like ViT). hidden_size (int) — The hidden size of the model. num_attention_heads (int) — The number of attention heads used in the multi-head attention layers of the model. num_hidden_layers (int) — The number of blocks in the model. push_to_hub < source > ( repo_id: str use_temp_dir: typing.Optional[bool] = None commit_message: typing.Optional[str] = None private: typing.Optional[bool] = None token: typing.Union[bool, str, NoneType] = None max_shard_size: typing.Union[int, str, NoneType] = '10GB' create_pr: bool = False safe_serialization: bool = False revision: str = None **deprecated_kwargs ) Parameters repo_id (str) — The name of the repository you want to push your config to. It should contain your organization name when pushing to a given organization. use_temp_dir (bool, optional) — Whether or not to use a temporary directory to store the files saved before they are pushed to the Hub. Will default to True if there is no directory named like repo_id, False otherwise. commit_message (str, optional) — Message to commit while pushing. Will default to "Upload config". private (bool, optional) — Whether or not the repository created should be private. token (bool or str, optional) — The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). Will default to True if repo_url is not specified. max_shard_size (int or str, optional, defaults to "10GB") — Only applicable for models. The maximum size for a checkpoint before being sharded. Checkpoints shard will then be each of size lower than this size. If expressed as a string, needs to be digits followed by a unit (like "5MB"). 
create_pr (bool, optional, defaults to False) — Whether or not to create a PR with the uploaded files or directly commit. safe_serialization (bool, optional, defaults to False) — Whether or not to convert the model weights to safetensors format for safer serialization. revision (str, optional) — Branch to push the uploaded files to. Upload the configuration file to the 🤗 Model Hub. Examples: from transformers import AutoConfig config = AutoConfig.from_pretrained("bert-base-cased") config.push_to_hub("my-finetuned-bert") config.push_to_hub("huggingface/my-finetuned-bert") dict_torch_dtype_to_str < source > ( d: typing.Dict[str, typing.Any] ) Checks whether the passed dictionary and its nested dicts have a torch_dtype key and if it’s not None, converts torch.dtype to a string of just the type. For example, torch.float32 gets converted into the "float32" string, which can then be stored in JSON format. from_dict < source > ( config_dict: typing.Dict[str, typing.Any] **kwargs ) → PretrainedConfig Parameters config_dict (Dict[str, Any]) — Dictionary that will be used to instantiate the configuration object. Such a dictionary can be retrieved from a pretrained checkpoint by leveraging the get_config_dict() method. kwargs (Dict[str, Any]) — Additional parameters from which to initialize the configuration object. The configuration object instantiated from those parameters. Instantiates a PretrainedConfig from a Python dictionary of parameters. from_json_file < source > ( json_file: typing.Union[str, os.PathLike] ) → PretrainedConfig Parameters json_file (str or os.PathLike) — Path to the JSON file containing the parameters. The configuration object instantiated from that JSON file. Instantiates a PretrainedConfig from the path to a JSON file of parameters. from_pretrained < source > ( pretrained_model_name_or_path: typing.Union[str, os.PathLike] cache_dir: typing.Union[str, os.PathLike, NoneType] = None force_download: bool = False local_files_only: bool = False token: typing.Union[bool, str, NoneType] = None revision: str = 'main' **kwargs ) → PretrainedConfig Parameters pretrained_model_name_or_path (str or os.PathLike) — This can be either: a string, the model id of a pretrained model configuration hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. a path to a directory containing a configuration file saved using the save_pretrained() method, e.g., ./my_model_directory/. a path or URL to a saved configuration JSON file, e.g., ./my_model_directory/configuration.json. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. force_download (bool, optional, defaults to False) — Whether or not to force (re-)downloading the configuration files and override the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete an incompletely received file. Attempts to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. 
If True, or not specified, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. To test a pull request you made on the Hub, you can pass revision="refs/pr/<pr_number>". return_unused_kwargs (bool, optional, defaults to False) — If False, then this function returns just the final configuration object. If True, then this function returns a tuple (config, unused_kwargs) where unused_kwargs is a dictionary consisting of the key/value pairs whose keys are not configuration attributes: i.e., the part of kwargs which has not been used to update config and is otherwise ignored. subfolder (str, optional, defaults to "") — In case the relevant files are located inside a subfolder of the model repo on huggingface.co, you can specify the folder name here. kwargs (Dict[str, Any], optional) — The values in kwargs of any keys which are configuration attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are not configuration attributes is controlled by the return_unused_kwargs keyword parameter. The configuration object instantiated from this pretrained model. Instantiate a PretrainedConfig (or a derived class) from a pretrained model configuration. Examples: config = BertConfig.from_pretrained( "bert-base-uncased" ) config = BertConfig.from_pretrained( "./test/saved_model/" ) config = BertConfig.from_pretrained("./test/saved_model/my_configuration.json") config = BertConfig.from_pretrained("bert-base-uncased", output_attentions=True, foo=False) assert config.output_attentions == True config, unused_kwargs = BertConfig.from_pretrained( "bert-base-uncased", output_attentions=True, foo=False, return_unused_kwargs=True ) assert config.output_attentions == True assert unused_kwargs == {"foo": False} get_config_dict < source > ( pretrained_model_name_or_path: typing.Union[str, os.PathLike] **kwargs ) → Tuple[Dict, Dict] Parameters pretrained_model_name_or_path (str or os.PathLike) — The identifier of the pre-trained checkpoint from which we want the dictionary of parameters. Returns Tuple[Dict, Dict] The dictionary(ies) that will be used to instantiate the configuration object. From a pretrained_model_name_or_path, resolve to a dictionary of parameters, to be used for instantiating a PretrainedConfig using from_dict. register_for_auto_class < source > ( auto_class = 'AutoConfig' ) Parameters auto_class (str or type, optional, defaults to "AutoConfig") — The auto class to register this new configuration with. Register this class with a given auto class. This should only be used for custom configurations as the ones in the library are already mapped with AutoConfig. This API is experimental and may have some slight breaking changes in the next releases. save_pretrained < source > ( save_directory: typing.Union[str, os.PathLike] push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — Directory where the configuration JSON file will be saved (will be created if it does not exist). push_to_hub (bool, optional, defaults to False) — Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the repository you want to push to with repo_id (will default to the name of save_directory in your namespace). 
kwargs (Dict[str, Any], optional) — Additional keyword arguments passed along to the push_to_hub() method. Save a configuration object to the directory save_directory, so that it can be re-loaded using the from_pretrained() class method. to_dict < source > ( ) → Dict[str, Any] Dictionary of all the attributes that make up this configuration instance. Serializes this instance to a Python dictionary. to_diff_dict < source > ( ) → Dict[str, Any] Dictionary of all the attributes that make up this configuration instance. Removes all attributes from the config which correspond to the default config attributes, for better readability, and serializes this instance to a Python dictionary. to_json_file < source > ( json_file_path: typing.Union[str, os.PathLike] use_diff: bool = True ) Parameters json_file_path (str or os.PathLike) — Path to the JSON file in which this configuration instance’s parameters will be saved. use_diff (bool, optional, defaults to True) — If set to True, only the difference between the config instance and the default PretrainedConfig() is serialized to the JSON file. Save this instance to a JSON file. to_json_string < source > ( use_diff: bool = True ) → str Parameters use_diff (bool, optional, defaults to True) — If set to True, only the difference between the config instance and the default PretrainedConfig() is serialized to the JSON string. String containing all the attributes that make up this configuration instance in JSON format. Serializes this instance to a JSON string. update < source > ( config_dict: typing.Dict[str, typing.Any] ) Parameters config_dict (Dict[str, Any]) — Dictionary of attributes that should be updated for this class. Updates attributes of this class with attributes from config_dict. update_from_string < source > ( update_str: str ) Parameters update_str (str) — String with attributes that should be updated for this class. Updates attributes of this class with attributes from update_str. The expected format is ints, floats and strings as is; for booleans, use true or false. For example: “n_embd=10,resid_pdrop=0.2,scale_attn_weights=false,summary_type=cls_index” The keys to change have to already exist in the config object.
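As a quick illustration of update_from_string, here is a minimal sketch (GPT2Config is used only because it defines the attributes from the example string above):

from transformers import GPT2Config

config = GPT2Config()
# Keys must already exist on the config; booleans are written as true/false
config.update_from_string("n_embd=10,resid_pdrop=0.2,scale_attn_weights=false,summary_type=cls_index")
print(config.n_embd, config.resid_pdrop, config.scale_attn_weights)  # 10 0.2 False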
https://huggingface.co/docs/transformers/main_classes/logging
Logging 🤗 Transformers has a centralized logging system, so that you can set up the verbosity of the library easily. Currently the default verbosity of the library is WARNING. To change the level of verbosity, just use one of the direct setters. For instance, here is how to change the verbosity to the INFO level. import transformers transformers.logging.set_verbosity_info() You can also use the environment variable TRANSFORMERS_VERBOSITY to override the default verbosity. You can set it to one of the following: debug, info, warning, error, critical. For example: TRANSFORMERS_VERBOSITY=error ./myprogram.py Additionally, some warnings can be disabled by setting the environment variable TRANSFORMERS_NO_ADVISORY_WARNINGS to a true value, like 1. This will disable any warning that is logged using logger.warning_advice. For example: TRANSFORMERS_NO_ADVISORY_WARNINGS=1 ./myprogram.py Here is an example of how to use the same logger as the library in your own module or script: from transformers.utils import logging logging.set_verbosity_info() logger = logging.get_logger("transformers") logger.info("INFO") logger.warning("WARN") All the methods of this logging module are documented below; the main ones are logging.get_verbosity() to get the current level of verbosity in the logger and logging.set_verbosity() to set the verbosity to the level of your choice. In order (from the least verbose to the most verbose), those levels (with their corresponding int values in parentheses) are: transformers.logging.CRITICAL or transformers.logging.FATAL (int value, 50): only reports the most critical errors. transformers.logging.ERROR (int value, 40): only reports errors. transformers.logging.WARNING or transformers.logging.WARN (int value, 30): only reports errors and warnings. This is the default level used by the library. transformers.logging.INFO (int value, 20): reports errors, warnings and basic information. transformers.logging.DEBUG (int value, 10): reports all information. By default, tqdm progress bars will be displayed during model download. logging.disable_progress_bar() and logging.enable_progress_bar() can be used to suppress or re-enable this behavior. Base setters transformers.utils.logging.set_verbosity_error < source > ( ) Set the verbosity to the ERROR level. transformers.utils.logging.set_verbosity_warning < source > ( ) Set the verbosity to the WARNING level. transformers.utils.logging.set_verbosity_info < source > ( ) Set the verbosity to the INFO level. transformers.utils.logging.set_verbosity_debug < source > ( ) Set the verbosity to the DEBUG level. Other functions transformers.utils.logging.get_verbosity < source > ( ) → int Return the current level for the 🤗 Transformers’s root logger as an int. 🤗 Transformers has the following logging levels: 50: transformers.logging.CRITICAL or transformers.logging.FATAL 40: transformers.logging.ERROR 30: transformers.logging.WARNING or transformers.logging.WARN 20: transformers.logging.INFO 10: transformers.logging.DEBUG transformers.utils.logging.set_verbosity < source > ( verbosity: int ) Parameters verbosity (int) — Logging level, e.g., one of: transformers.logging.CRITICAL or transformers.logging.FATAL transformers.logging.ERROR transformers.logging.WARNING or transformers.logging.WARN transformers.logging.INFO transformers.logging.DEBUG Set the verbosity level for the 🤗 Transformers’s root logger. transformers.utils.logging.get_logger < source > ( name: typing.Optional[str] = None ) Return a logger with the specified name. 
This function is not supposed to be directly accessed unless you are writing a custom transformers module. transformers.utils.logging.enable_default_handler < source > ( ) Enable the default handler of the HuggingFace Transformers’s root logger. transformers.utils.logging.disable_default_handler < source > ( ) Disable the default handler of the HuggingFace Transformers’s root logger. transformers.utils.logging.enable_explicit_format < source > ( ) Enable explicit formatting for every HuggingFace Transformers’s logger. The explicit formatter is as follows: [LEVELNAME|FILENAME|LINE NUMBER] TIME >> MESSAGE All handlers currently bound to the root logger are affected by this method. transformers.utils.logging.reset_format < source > ( ) Resets the formatting for HuggingFace Transformers’s loggers. All handlers currently bound to the root logger are affected by this method. transformers.utils.logging.enable_progress_bar < source > ( ) Enable tqdm progress bar. transformers.utils.logging.disable_progress_bar < source > ( ) Disable tqdm progress bar.
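Putting a few of these helpers together, here is a minimal sketch of controlling verbosity, formatting and progress bars (it assumes the level constants such as INFO are importable from transformers.utils.logging, matching the int values listed above):

from transformers.utils import logging

logging.set_verbosity(logging.INFO)   # equivalent to logging.set_verbosity_info()
print(logging.get_verbosity())        # 20
logging.enable_explicit_format()      # [LEVELNAME|FILENAME|LINE NUMBER] TIME >> MESSAGE
logging.disable_progress_bar()        # hide tqdm bars shown during downloads
logging.reset_format()                # restore the default formatting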
https://huggingface.co/docs/transformers/main_classes/agent
Agents & Tools Transformers Agents is an experimental API which is subject to change at any time. Results returned by the agents can vary as the APIs or underlying models are prone to change. To learn more about agents and tools make sure to read the introductory guide. This page contains the API docs for the underlying classes. Agents We provide three types of agents: HfAgent uses inference endpoints for opensource models, LocalAgent uses a model of your choice locally and OpenAiAgent uses OpenAI closed models. HfAgent class transformers.HfAgent < source > ( url_endpoint token = None chat_prompt_template = None run_prompt_template = None additional_tools = None ) Parameters url_endpoint (str) — The name of the url endpoint to use. token (str, optional) — The token to use as HTTP bearer authorization for remote files. If unset, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). chat_prompt_template (str, optional) — Pass along your own prompt if you want to override the default template for the chat method. Can be the actual prompt template or a repo ID (on the Hugging Face Hub). The prompt should be in a file named chat_prompt_template.txt in this repo in this case. run_prompt_template (str, optional) — Pass along your own prompt if you want to override the default template for the run method. Can be the actual prompt template or a repo ID (on the Hugging Face Hub). The prompt should be in a file named run_prompt_template.txt in this repo in this case. additional_tools (Tool, list of tools or dictionary with tool values, optional) — Any additional tools to include on top of the default ones. If you pass along a tool with the same name as one of the default tools, that default tool will be overridden. Agent that uses an inference endpoint to generate code. Example: from transformers import HfAgent agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder") agent.run("Is the following `text` (in Spanish) positive or negative?", text="¡Este es un API muy agradable!") LocalAgent class transformers.LocalAgent < source > ( model tokenizer chat_prompt_template = None run_prompt_template = None additional_tools = None ) Parameters model (PreTrainedModel) — The model to use for the agent. tokenizer (PreTrainedTokenizer) — The tokenizer to use for the agent. chat_prompt_template (str, optional) — Pass along your own prompt if you want to override the default template for the chat method. Can be the actual prompt template or a repo ID (on the Hugging Face Hub). The prompt should be in a file named chat_prompt_template.txt in this repo in this case. run_prompt_template (str, optional) — Pass along your own prompt if you want to override the default template for the run method. Can be the actual prompt template or a repo ID (on the Hugging Face Hub). The prompt should be in a file named run_prompt_template.txt in this repo in this case. additional_tools (Tool, list of tools or dictionary with tool values, optional) — Any additional tools to include on top of the default ones. If you pass along a tool with the same name as one of the default tools, that default tool will be overridden. Agent that uses a local model and tokenizer to generate code. 
Example: import torch from transformers import AutoModelForCausalLM, AutoTokenizer, LocalAgent checkpoint = "bigcode/starcoder" model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", torch_dtype=torch.bfloat16) tokenizer = AutoTokenizer.from_pretrained(checkpoint) agent = LocalAgent(model, tokenizer) agent.run("Draw me a picture of rivers and lakes.") from_pretrained < source > ( pretrained_model_name_or_path **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — The name of a repo on the Hub or a local path to a folder containing both model and tokenizer. kwargs (Dict[str, Any], optional) — Keyword arguments passed along to from_pretrained(). Convenience method to build a LocalAgent from a pretrained checkpoint. Example: import torch from transformers import LocalAgent agent = LocalAgent.from_pretrained("bigcode/starcoder", device_map="auto", torch_dtype=torch.bfloat16) agent.run("Draw me a picture of rivers and lakes.") OpenAiAgent class transformers.OpenAiAgent < source > ( model = 'text-davinci-003' api_key = None chat_prompt_template = None run_prompt_template = None additional_tools = None ) Parameters model (str, optional, defaults to "text-davinci-003") — The name of the OpenAI model to use. api_key (str, optional) — The API key to use. If unset, will look for the environment variable "OPENAI_API_KEY". chat_prompt_template (str, optional) — Pass along your own prompt if you want to override the default template for the chat method. Can be the actual prompt template or a repo ID (on the Hugging Face Hub). The prompt should be in a file named chat_prompt_template.txt in this repo in this case. run_prompt_template (str, optional) — Pass along your own prompt if you want to override the default template for the run method. Can be the actual prompt template or a repo ID (on the Hugging Face Hub). The prompt should be in a file named run_prompt_template.txt in this repo in this case. additional_tools (Tool, list of tools or dictionary with tool values, optional) — Any additional tools to include on top of the default ones. If you pass along a tool with the same name as one of the default tools, that default tool will be overridden. Agent that uses the OpenAI API to generate code. The OpenAI models are used in generation mode, so even for the chat() API, it’s better to use models like "text-davinci-003" over the ChatGPT variant. Proper support for ChatGPT models will come in a future version. Example: from transformers import OpenAiAgent agent = OpenAiAgent(model="text-davinci-003", api_key=xxx) agent.run("Is the following `text` (in Spanish) positive or negative?", text="¡Este es un API muy agradable!") AzureOpenAiAgent class transformers.AzureOpenAiAgent < source > ( deployment_id api_key = None resource_name = None api_version = '2022-12-01' is_chat_model = None chat_prompt_template = None run_prompt_template = None additional_tools = None ) Parameters deployment_id (str) — The name of the deployed Azure OpenAI model to use. api_key (str, optional) — The API key to use. If unset, will look for the environment variable "AZURE_OPENAI_API_KEY". resource_name (str, optional) — The name of your Azure OpenAI Resource. If unset, will look for the environment variable "AZURE_OPENAI_RESOURCE_NAME". api_version (str, optional, defaults to "2022-12-01") — The API version to use for this agent. is_chat_model (bool, optional) — Whether you are using a completion model or a chat model (see note above, chat models won’t be as efficient). 
Will default to whether "gpt" appears in the deployment_id. chat_prompt_template (str, optional) — Pass along your own prompt if you want to override the default template for the chat method. Can be the actual prompt template or a repo ID (on the Hugging Face Hub). The prompt should be in a file named chat_prompt_template.txt in this repo in this case. run_prompt_template (str, optional) — Pass along your own prompt if you want to override the default template for the run method. Can be the actual prompt template or a repo ID (on the Hugging Face Hub). The prompt should be in a file named run_prompt_template.txt in this repo in this case. additional_tools (Tool, list of tools or dictionary with tool values, optional) — Any additional tools to include on top of the default ones. If you pass along a tool with the same name as one of the default tools, that default tool will be overridden. Agent that uses Azure OpenAI to generate code. See the official documentation to learn how to deploy an OpenAI model on Azure. The OpenAI models are used in generation mode, so even for the chat() API, it’s better to use models like "text-davinci-003" over the ChatGPT variant. Proper support for ChatGPT models will come in a future version. Example: from transformers import AzureOpenAiAgent agent = AzureOpenAiAgent(deployment_id="Davinci-003", api_key=xxx, resource_name=yyy) agent.run("Is the following `text` (in Spanish) positive or negative?", text="¡Este es un API muy agradable!") Agent class transformers.Agent < source > ( chat_prompt_template = None run_prompt_template = None additional_tools = None ) Parameters chat_prompt_template (str, optional) — Pass along your own prompt if you want to override the default template for the chat method. Can be the actual prompt template or a repo ID (on the Hugging Face Hub). The prompt should be in a file named chat_prompt_template.txt in this repo in this case. run_prompt_template (str, optional) — Pass along your own prompt if you want to override the default template for the run method. Can be the actual prompt template or a repo ID (on the Hugging Face Hub). The prompt should be in a file named run_prompt_template.txt in this repo in this case. additional_tools (Tool, list of tools or dictionary with tool values, optional) — Any additional tools to include on top of the default ones. If you pass along a tool with the same name as one of the default tools, that default tool will be overridden. Base class for all agents which contains the main API methods. chat < source > ( task return_code = False remote = False **kwargs ) Parameters task (str) — The task to perform return_code (bool, optional, defaults to False) — Whether to just return code and not evaluate it. remote (bool, optional, defaults to False) — Whether or not to use remote tools (inference endpoints) instead of local ones. kwargs (additional keyword arguments, optional) — Any keyword argument to send to the agent when evaluating the code. Sends a new request to the agent in a chat. Will use the previous ones in its history. Example: from transformers import HfAgent agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder") agent.chat("Draw me a picture of rivers and lakes") agent.chat("Transform the picture so that there is a rock in there") run < source > ( task return_code = False remote = False **kwargs ) Parameters task (str) — The task to perform return_code (bool, optional, defaults to False) — Whether to just return code and not evaluate it. 
remote (bool, optional, defaults to False) — Whether or not to use remote tools (inference endpoints) instead of local ones. kwargs (additional keyword arguments, optional) — Any keyword argument to send to the agent when evaluating the code. Sends a request to the agent. Example: from transformers import HfAgent agent = HfAgent("https://api-inference.huggingface.co/models/bigcode/starcoder") agent.run("Draw me a picture of rivers and lakes") Clears the history of prior calls to chat(). Tools load_tool transformers.load_tool < source > ( task_or_repo_id model_repo_id = None remote = False token = None **kwargs ) Parameters task_or_repo_id (str) — The task for which to load the tool or a repo ID of a tool on the Hub. Tasks implemented in Transformers are: "document-question-answering" "image-captioning" "image-question-answering" "image-segmentation" "speech-to-text" "summarization" "text-classification" "text-question-answering" "text-to-speech" "translation" model_repo_id (str, optional) — Use this argument to use a different model than the default one for the tool you selected. remote (bool, optional, defaults to False) — Whether to use your tool by downloading the model or (if it is available) with an inference endpoint. token (str, optional) — The token to identify you on hf.co. If unset, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). kwargs (additional keyword arguments, optional) — Additional keyword arguments that will be split in two: all arguments relevant to the Hub (such as cache_dir, revision, subfolder) will be used when downloading the files for your tool, and the others will be passed along to its init. Main function to quickly load a tool, be it on the Hub or in the Transformers library. Tool class transformers.Tool < source > ( *args **kwargs ) A base class for the functions used by the agent. Subclass this and implement the __call__ method as well as the following class attributes: description (str) — A short description of what your tool does, the inputs it expects and the output(s) it will return. For instance ‘This is a tool that downloads a file from a url. It takes the url as input, and returns the text contained in the file’. name (str) — A performative name that will be used for your tool in the prompt to the agent. For instance "text-classifier" or "image_generator". inputs (List[str]) — The list of modalities expected for the inputs (in the same order as in the call). Modalities should be "text", "image" or "audio". This is only used by launch_gradio_demo or to make a nice space from your tool. outputs (List[str]) — The list of modalities returned by the tool (in the same order as the return of the call method). Modalities should be "text", "image" or "audio". This is only used by launch_gradio_demo or to make a nice space from your tool. You can also override the method setup() if your tool has an expensive operation to perform before being usable (such as loading a model). setup() will be called the first time you use your tool, but not at instantiation. Creates a Tool from a gradio tool. from_hub < source > ( repo_id: str model_repo_id: typing.Optional[str] = None token: typing.Optional[str] = None remote: bool = False **kwargs ) Parameters repo_id (str) — The name of the repo on the Hub where your tool is defined. model_repo_id (str, optional) — If your tool uses a model and you want to use a different model than the default, you can pass a second repo ID or an endpoint url to this argument. 
token (str, optional) — The token to identify you on hf.co. If unset, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). remote (bool, optional, defaults to False) — Whether to use your tool by downloading the model or (if it is available) with an inference endpoint. kwargs (additional keyword arguments, optional) — Additional keyword arguments that will be split in two: all arguments relevant to the Hub (such as cache_dir, revision, subfolder) will be used when downloading the files for your tool, and the others will be passed along to its init. Loads a tool defined on the Hub. push_to_hub < source > ( repo_id: str commit_message: str = 'Upload tool' private: typing.Optional[bool] = None token: typing.Union[bool, str, NoneType] = None create_pr: bool = False ) Parameters repo_id (str) — The name of the repository you want to push your tool to. It should contain your organization name when pushing to a given organization. commit_message (str, optional, defaults to "Upload tool") — Message to commit while pushing. private (bool, optional) — Whether or not the repository created should be private. token (bool or str, optional) — The token to use as HTTP bearer authorization for remote files. If unset, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). create_pr (bool, optional, defaults to False) — Whether or not to create a PR with the uploaded files or directly commit. Upload the tool to the Hub. save < source > ( output_dir ) Parameters output_dir (str) — The folder in which you want to save your tool. Saves the relevant code files for your tool so it can be pushed to the Hub. This will copy the code of your tool in output_dir as well as autogenerate: a config file named tool_config.json an app.py file so that your tool can be converted to a space a requirements.txt containing the names of the modules used by your tool (as detected when inspecting its code) You should only use this method to save tools that are defined in a separate module (not __main__). Override this method for any operation that is expensive and needs to be executed before you start using your tool, such as loading a big model. PipelineTool class transformers.PipelineTool < source > ( model = None pre_processor = None post_processor = None device = None device_map = None model_kwargs = None token = None **hub_kwargs ) Parameters model (str or PreTrainedModel, optional) — The name of the checkpoint to use for the model, or the instantiated model. If unset, will default to the value of the class attribute default_checkpoint. pre_processor (str or Any, optional) — The name of the checkpoint to use for the pre-processor, or the instantiated pre-processor (can be a tokenizer, an image processor, a feature extractor or a processor). Will default to the value of model if unset. post_processor (str or Any, optional) — The name of the checkpoint to use for the post-processor, or the instantiated post-processor (can be a tokenizer, an image processor, a feature extractor or a processor). Will default to the pre_processor if unset. device (int, str or torch.device, optional) — The device on which to execute the model. Will default to any accelerator available (GPU, MPS etc…), the CPU otherwise. device_map (str or dict, optional) — If passed along, will be used to instantiate the model. model_kwargs (dict, optional) — Any keyword argument to send to the model instantiation. 
token (str, optional) — The token to use as HTTP bearer authorization for remote files. If unset, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). hub_kwargs (additional keyword arguments, optional) — Any additional keyword argument to send to the methods that will load the data from the Hub. A Tool tailored towards Transformer models. On top of the class attributes of the base class Tool, you will need to specify: model_class (type) — The class to use to load the model in this tool. default_checkpoint (str) — The default checkpoint that should be used when the user doesn’t specify one. pre_processor_class (type, optional, defaults to AutoProcessor) — The class to use to load the pre-processor post_processor_class (type, optional, defaults to AutoProcessor) — The class to use to load the post-processor (when different from the pre-processor). Uses the post_processor to decode the model output. Uses the pre_processor to prepare the inputs for the model. Sends the inputs through the model. Instantiates the pre_processor, model and post_processor if necessary. RemoteTool class transformers.RemoteTool < source > ( endpoint_url = None token = None tool_class = None ) Parameters endpoint_url (str) — The url of the endpoint to use. token (str, optional) — The token to use as HTTP bearer authorization for remote files. If unset, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). tool_class (type, optional) — The corresponding tool_class if this is a remote version of an existing tool. Will help determine when the output should be converted to another type (like images). A Tool that will make requests to an inference endpoint. You can override this method in your custom class of RemoteTool to apply some custom post-processing of the outputs of the endpoint. Prepare the inputs received for the HTTP client sending data to the endpoint. Positional arguments will be matched with the signature of the tool_class if it was provided at instantiation. Images will be encoded into bytes. You can override this method in your custom class of RemoteTool. launch_gradio_demo transformers.launch_gradio_demo < source > ( tool_class: Tool ) Parameters tool_class (type) — The class of the tool for which to launch the demo. Launches a gradio demo for a tool. The corresponding tool class needs to properly implement the class attributes inputs and outputs. Agent Types Agents can handle any type of object in-between tools; tools, being completely multimodal, can accept and return text, image, audio, video, among other types. In order to increase compatibility between tools, as well as to correctly render these returns in ipython (jupyter, colab, ipython notebooks, …), we implement wrapper classes around these types. The wrapped objects should continue behaving as initially; a text object should still behave as a string, an image object should still behave as a PIL.Image. These types have three specific purposes: Calling to_raw on the type should return the underlying object Calling to_string on the type should return the object as a string: that can be the string in case of an AgentText but will be the path of the serialized version of the object in other instances Displaying it in an ipython kernel should display the object correctly AgentText class transformers.tools.agent_types.AgentText < source > ( value ) Text type returned by the agent. Behaves as a string. 
AgentImage class transformers.tools.agent_types.AgentImage < source > ( value ) Image type returned by the agent. Behaves as a PIL.Image. Returns the “raw” version of that object. In the case of an AgentImage, it is a PIL.Image. Returns the stringified version of that object. In the case of an AgentImage, it is a path to the serialized version of the image. AgentAudio class transformers.tools.agent_types.AgentAudio < source > ( value samplerate = 16000 ) Audio type returned by the agent. Returns the “raw” version of that object. It is a torch.Tensor object. Returns the stringified version of that object. In the case of an AgentAudio, it is a path to the serialized version of the audio.
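To make the wrapper behavior concrete, here is a minimal sketch of inspecting a wrapped return value (the PIL image created here is a purely illustrative stand-in for something a tool would produce):

from PIL import Image
from transformers.tools.agent_types import AgentImage

image = Image.new("RGB", (64, 64))   # stand-in for an image produced by a tool
wrapped = AgentImage(image)

raw = wrapped.to_raw()       # the underlying PIL.Image
path = wrapped.to_string()   # path to a serialized copy of the image on disk
print(type(raw), path)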
https://huggingface.co/docs/transformers/main_classes/onnx
Exporting 🤗 Transformers models to ONNX 🤗 Transformers provides a transformers.onnx package that enables you to convert model checkpoints to an ONNX graph by leveraging configuration objects. See the guide on exporting 🤗 Transformers models for more details. ONNX Configurations We provide three abstract classes that you should inherit from, depending on the type of model architecture you wish to export: Encoder-based models inherit from OnnxConfig Decoder-based models inherit from OnnxConfigWithPast Encoder-decoder models inherit from OnnxSeq2SeqConfigWithPast OnnxConfig class transformers.onnx.OnnxConfig < source > ( config: PretrainedConfig task: str = 'default' patching_specs: typing.List[transformers.onnx.config.PatchingSpec] = None ) Base class for ONNX exportable model describing metadata on how to export the model through the ONNX format. flatten_output_collection_property < source > ( name: str field: typing.Iterable[typing.Any] ) → (Dict[str, Any]) Outputs with flattened structure and key mapping this new structure. Flatten any potential nested structure expanding the name of the field with the index of the element within the structure. from_model_config < source > ( config: PretrainedConfig task: str = 'default' ) Instantiate a OnnxConfig for a specific model generate_dummy_inputs < source > ( preprocessor: typing.Union[ForwardRef('PreTrainedTokenizerBase'), ForwardRef('FeatureExtractionMixin'), ForwardRef('ImageProcessingMixin')] batch_size: int = -1 seq_length: int = -1 num_choices: int = -1 is_pair: bool = False framework: typing.Optional[transformers.utils.generic.TensorType] = None num_channels: int = 3 image_width: int = 40 image_height: int = 40 sampling_rate: int = 22050 time_duration: float = 5.0 frequency: int = 220 tokenizer: PreTrainedTokenizerBase = None ) Parameters batch_size (int, optional, defaults to -1) — The batch size to export the model for (-1 means dynamic axis). num_choices (int, optional, defaults to -1) — The number of candidate answers provided for multiple choice task (-1 means dynamic axis). seq_length (int, optional, defaults to -1) — The sequence length to export the model for (-1 means dynamic axis). is_pair (bool, optional, defaults to False) — Indicate if the input is a pair (sentence 1, sentence 2) framework (TensorType, optional, defaults to None) — The framework (PyTorch or TensorFlow) that the tokenizer will generate tensors for. num_channels (int, optional, defaults to 3) — The number of channels of the generated images. image_width (int, optional, defaults to 40) — The width of the generated images. image_height (int, optional, defaults to 40) — The height of the generated images. sampling_rate (int, optional defaults to 22050) — The sampling rate for audio data generation. time_duration (float, optional defaults to 5.0) — Total seconds of sampling for audio data generation. frequency (int, optional defaults to 220) — The desired natural frequency of generated audio. Generate inputs to provide to the ONNX exporter for the specific framework generate_dummy_inputs_onnxruntime < source > ( reference_model_inputs: typing.Mapping[str, typing.Any] ) → Mapping[str, Tensor] Parameters reference_model_inputs ([Mapping[str, Tensor]) — Reference inputs for the model. Returns Mapping[str, Tensor] The mapping holding the kwargs to provide to the model’s forward function Generate inputs for ONNX Runtime using the reference model inputs. Override this to run inference with seq2seq models which have the encoder and decoder exported as separate ONNX files. 
use_external_data_format < source > ( num_parameters: int ) Flag indicating if the model requires using external data format OnnxConfigWithPast class transformers.onnx.OnnxConfigWithPast < source > ( config: PretrainedConfig task: str = 'default' patching_specs: typing.List[transformers.onnx.config.PatchingSpec] = None use_past: bool = False ) fill_with_past_key_values_ < source > ( inputs_or_outputs: typing.Mapping[str, typing.Mapping[int, str]] direction: str inverted_values_shape: bool = False ) Fill the input_or_outputs mapping with past_key_values dynamic axes, considering the direction. with_past < source > ( config: PretrainedConfig task: str = 'default' ) Instantiate an OnnxConfig with the use_past attribute set to True OnnxSeq2SeqConfigWithPast class transformers.onnx.OnnxSeq2SeqConfigWithPast < source > ( config: PretrainedConfig task: str = 'default' patching_specs: typing.List[transformers.onnx.config.PatchingSpec] = None use_past: bool = False ) ONNX Features Each ONNX configuration is associated with a set of features that enable you to export models for different types of topologies or tasks. FeaturesManager class transformers.onnx.FeaturesManager < source > ( ) check_supported_model_or_raise < source > ( model: typing.Union[ForwardRef('PreTrainedModel'), ForwardRef('TFPreTrainedModel')] feature: str = 'default' ) Check whether or not the model has the requested features. determine_framework < source > ( model: str framework: str = None ) Parameters model (str) — The name of the model to export. framework (str, optional, defaults to None) — The framework to use for the export. See above for priority if none provided. Determines the framework to use for the export. The priority is in the following order: User input via framework. If local checkpoint is provided, use the same framework as the checkpoint. Available framework in environment, with priority given to PyTorch get_config < source > ( model_type: str feature: str ) → OnnxConfig Parameters model_type (str) — The model type to retrieve the config for. feature (str) — The feature to retrieve the config for. config for the combination Gets the OnnxConfig for a model_type and feature combination. get_model_class_for_feature < source > ( feature: str framework: str = 'pt' ) Parameters feature (str) — The feature required. framework (str, optional, defaults to "pt") — The framework to use for the export. Attempts to retrieve an AutoModel class from a feature name. get_model_from_feature < source > ( feature: str model: str framework: str = None cache_dir: str = None ) Parameters feature (str) — The feature required. model (str) — The name of the model to export. framework (str, optional, defaults to None) — The framework to use for the export. See FeaturesManager.determine_framework for the priority should none be provided. Attempts to retrieve a model from a model’s name and the feature to be enabled. get_supported_features_for_model_type < source > ( model_type: str model_name: typing.Optional[str] = None ) Parameters model_type (str) — The model type to retrieve the supported features for. model_name (str, optional) — The name attribute of the model object, only used for the exception message. Tries to retrieve the feature -> OnnxConfig constructor map from the model type.
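To see how these pieces fit together, here is a minimal export sketch in the spirit of the export guide mentioned above (the checkpoint name, the "default" feature and the output path are illustrative assumptions; it also assumes the transformers.onnx.export helper and the default_onnx_opset attribute of the config):

from pathlib import Path
import transformers
from transformers import AutoModel, AutoTokenizer
from transformers.onnx import FeaturesManager

model_ckpt = "distilbert-base-uncased"
model = AutoModel.from_pretrained(model_ckpt)
tokenizer = AutoTokenizer.from_pretrained(model_ckpt)

# Check that the model supports the requested feature and get its OnnxConfig constructor
model_kind, model_onnx_config = FeaturesManager.check_supported_model_or_raise(model, feature="default")
onnx_config = model_onnx_config(model.config)

# Export the model to an ONNX graph
onnx_inputs, onnx_outputs = transformers.onnx.export(
    preprocessor=tokenizer,
    model=model,
    config=onnx_config,
    opset=onnx_config.default_onnx_opset,
    output=Path("model.onnx"),
)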
https://huggingface.co/docs/transformers/main_classes/text_generation
Generation Each framework has a generate method for text generation implemented in its respective GenerationMixin class: PyTorch generate() is implemented in GenerationMixin. TensorFlow generate() is implemented in TFGenerationMixin. Flax/JAX generate() is implemented in FlaxGenerationMixin. Regardless of your framework of choice, you can parameterize the generate method with a GenerationConfig class instance. Please refer to this class for the complete list of generation parameters, which control the behavior of the generation method. To learn how to inspect a model’s generation configuration, what the defaults are, how to change the parameters ad hoc, and how to create and save a customized generation configuration, refer to the text generation strategies guide. The guide also explains how to use related features, like token streaming. GenerationConfig class transformers.GenerationConfig < source > ( **kwargs ) Parameters that control the length of the output max_length (int, optional, defaults to 20) — The maximum length the generated tokens can have. Corresponds to the length of the input prompt + max_new_tokens. Its effect is overridden by max_new_tokens, if also set. max_new_tokens (int, optional) — The maximum number of tokens to generate, ignoring the number of tokens in the prompt. min_length (int, optional, defaults to 0) — The minimum length of the sequence to be generated. Corresponds to the length of the input prompt + min_new_tokens. Its effect is overridden by min_new_tokens, if also set. min_new_tokens (int, optional) — The minimum number of tokens to generate, ignoring the number of tokens in the prompt. early_stopping (bool or str, optional, defaults to False) — Controls the stopping condition for beam-based methods, like beam-search. It accepts the following values: True, where the generation stops as soon as there are num_beams complete candidates; False, where a heuristic is applied and the generation stops when it is very unlikely to find better candidates; "never", where the beam search procedure only stops when there cannot be better candidates (canonical beam search algorithm). max_time (float, optional) — The maximum amount of time you allow the computation to run for, in seconds. Generation will still finish the current pass after the allocated time has passed. Parameters that control the generation strategy used do_sample (bool, optional, defaults to False) — Whether or not to use sampling; use greedy decoding otherwise. num_beams (int, optional, defaults to 1) — Number of beams for beam search. 1 means no beam search. num_beam_groups (int, optional, defaults to 1) — Number of groups to divide num_beams into in order to ensure diversity among different groups of beams. See this paper for more details. penalty_alpha (float, optional) — The value balances the model confidence and the degeneration penalty in contrastive search decoding. use_cache (bool, optional, defaults to True) — Whether or not the model should use the past key/values attentions (if applicable to the model) to speed up decoding. Parameters for manipulation of the model output logits temperature (float, optional, defaults to 1.0) — The value used to modulate the next token probabilities. top_k (int, optional, defaults to 50) — The number of highest probability vocabulary tokens to keep for top-k-filtering. top_p (float, optional, defaults to 1.0) — If set to float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation. 
typical_p (float, optional, defaults to 1.0) — Local typicality measures how similar the conditional probability of predicting a target token next is to the expected conditional probability of predicting a random token next, given the partial text already generated. If set to float < 1, the smallest set of the most locally typical tokens with probabilities that add up to typical_p or higher are kept for generation. See this paper for more details. epsilon_cutoff (float, optional, defaults to 0.0) — If set to float strictly between 0 and 1, only tokens with a conditional probability greater than epsilon_cutoff will be sampled. In the paper, suggested values range from 3e-4 to 9e-4, depending on the size of the model. See Truncation Sampling as Language Model Desmoothing for more details. eta_cutoff (float, optional, defaults to 0.0) — Eta sampling is a hybrid of locally typical sampling and epsilon sampling. If set to float strictly between 0 and 1, a token is only considered if it is greater than either eta_cutoff or sqrt(eta_cutoff) * exp(-entropy(softmax(next_token_logits))). The latter term is intuitively the expected next token probability, scaled by sqrt(eta_cutoff). In the paper, suggested values range from 3e-4 to 2e-3, depending on the size of the model. See Truncation Sampling as Language Model Desmoothing for more details. diversity_penalty (float, optional, defaults to 0.0) — This value is subtracted from a beam’s score if it generates the same token as any beam from another group at a particular time. Note that diversity_penalty is only effective if group beam search is enabled. repetition_penalty (float, optional, defaults to 1.0) — The parameter for repetition penalty. 1.0 means no penalty. See this paper for more details. encoder_repetition_penalty (float, optional, defaults to 1.0) — The parameter for encoder_repetition_penalty. An exponential penalty on sequences that are not in the original input. 1.0 means no penalty. length_penalty (float, optional, defaults to 1.0) — Exponential penalty to the length that is used with beam-based generation. It is applied as an exponent to the sequence length, which in turn is used to divide the score of the sequence. Since the score is the log likelihood of the sequence (i.e. negative), length_penalty > 0.0 promotes longer sequences, while length_penalty < 0.0 encourages shorter sequences. no_repeat_ngram_size (int, optional, defaults to 0) — If set to int > 0, all ngrams of that size can only occur once. bad_words_ids (List[List[int]], optional) — List of lists of token ids that are not allowed to be generated. Check NoBadWordsLogitsProcessor for further documentation and examples. force_words_ids (List[List[int]] or List[List[List[int]]], optional) — List of token ids that must be generated. If given a List[List[int]], this is treated as a simple list of words that must be included, the opposite of bad_words_ids. If given List[List[List[int]]], this triggers a disjunctive constraint, where one can allow different forms of each word. renormalize_logits (bool, optional, defaults to False) — Whether to renormalize the logits after applying all the logits processors or warpers (including the custom ones). It’s highly recommended to set this flag to True as the search algorithms assume the score logits are normalized, but some logit processors or warpers break the normalization. 
constraints (List[Constraint], optional) — Custom constraints that can be added to the generation to ensure that the output will contain the use of certain tokens as defined by Constraint objects, in the most sensible way possible. forced_bos_token_id (int, optional, defaults to model.config.forced_bos_token_id) — The id of the token to force as the first generated token after the decoder_start_token_id. Useful for multilingual models like mBART where the first generated token needs to be the target language token. forced_eos_token_id (Union[int, List[int]], optional, defaults to model.config.forced_eos_token_id) — The id of the token to force as the last generated token when max_length is reached. Optionally, use a list to set multiple end-of-sequence tokens. remove_invalid_values (bool, optional, defaults to model.config.remove_invalid_values) — Whether to remove possible nan and inf outputs of the model to prevent the generation method from crashing. Note that using remove_invalid_values can slow down generation. exponential_decay_length_penalty (tuple(int, float), optional) — This tuple adds an exponentially increasing length penalty after a certain number of tokens have been generated. The tuple shall consist of: (start_index, decay_factor) where start_index indicates where the penalty starts and decay_factor represents the factor of exponential decay. suppress_tokens (List[int], optional) — A list of tokens that will be suppressed at generation. The SuppressTokens logit processor will set their log probs to -inf so that they are not sampled. begin_suppress_tokens (List[int], optional) — A list of tokens that will be suppressed at the beginning of the generation. The SuppressBeginTokens logit processor will set their log probs to -inf so that they are not sampled. forced_decoder_ids (List[List[int]], optional) — A list of pairs of integers which indicates a mapping from generation indices to token indices that will be forced before sampling. For example, [[1, 123]] means the second generated token will always be a token of index 123. sequence_bias (Dict[Tuple[int], float], optional) — Dictionary that maps a sequence of tokens to its bias term. Positive biases increase the odds of the sequence being selected, while negative biases do the opposite. Check SequenceBiasLogitsProcessor for further documentation and examples. guidance_scale (float, optional) — The guidance scale for classifier free guidance (CFG). CFG is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate samples that are more closely linked to the input prompt, usually at the expense of poorer quality. low_memory (bool, optional) — Switch to sequential topk for contrastive search to reduce peak memory. Used with contrastive search. Parameters that define the output variables of `generate` num_return_sequences (int, optional, defaults to 1) — The number of independently computed returned sequences for each element in the batch. output_attentions (bool, optional, defaults to False) — Whether or not to return the attention tensors of all attention layers. See attentions under returned tensors for more details. output_hidden_states (bool, optional, defaults to False) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more details. output_scores (bool, optional, defaults to False) — Whether or not to return the prediction scores. See scores under returned tensors for more details. 
return_dict_in_generate (bool, optional, defaults to False) — Whether or not to return a ModelOutput instead of a plain tuple. Special tokens that can be used at generation time pad_token_id (int, optional) — The id of the padding token. bos_token_id (int, optional) — The id of the beginning-of-sequence token. eos_token_id (Union[int, List[int]], optional) — The id of the end-of-sequence token. Optionally, use a list to set multiple end-of-sequence tokens. Generation parameters exclusive to encoder-decoder models encoder_no_repeat_ngram_size (int, optional, defaults to 0) — If set to int > 0, all ngrams of that size that occur in the encoder_input_ids cannot occur in the decoder_input_ids. decoder_start_token_id (int, optional) — If an encoder-decoder model starts decoding with a different token than bos, the id of that token. Class that holds a configuration for a generation task. A generate call supports the following generation methods for text-decoder, text-to-text, speech-to-text, and vision-to-text models: greedy decoding by calling greedy_search() if num_beams=1 and do_sample=False contrastive search by calling contrastive_search() if penalty_alpha>0 and top_k>1 multinomial sampling by calling sample() if num_beams=1 and do_sample=True beam-search decoding by calling beam_search() if num_beams>1 and do_sample=False beam-search multinomial sampling by calling beam_sample() if num_beams>1 and do_sample=True diverse beam-search decoding by calling group_beam_search(), if num_beams>1 and num_beam_groups>1 constrained beam-search decoding by calling constrained_beam_search(), if constraints!=None or force_words_ids!=None assisted decoding by calling assisted_decoding(), if assistant_model is passed to generate() You do not need to call any of the above methods directly. Pass custom parameter values to generate() instead. To learn more about decoding strategies refer to the text generation strategies guide. from_pretrained < source > ( pretrained_model_name: typing.Union[str, os.PathLike] config_file_name: typing.Union[str, os.PathLike, NoneType] = None cache_dir: typing.Union[str, os.PathLike, NoneType] = None force_download: bool = False local_files_only: bool = False token: typing.Union[bool, str, NoneType] = None revision: str = 'main' **kwargs ) → GenerationConfig Parameters pretrained_model_name (str or os.PathLike) — This can be either: a string, the model id of a pretrained model configuration hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. a path to a directory containing a configuration file saved using the save_pretrained() method, e.g., ./my_model_directory/. config_file_name (str or os.PathLike, optional, defaults to "generation_config.json") — Name of the generation configuration JSON file to be loaded from pretrained_model_name. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. force_download (bool, optional, defaults to False) — Whether or not to force (re-)downloading the configuration files and override the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete an incompletely received file. Attempts to resume the download if such a file exists. 
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, or not specified, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. To test a pull request you made on the Hub, you can pass revision="refs/pr/<pr_number>". return_unused_kwargs (bool, optional, defaults to False) — If False, then this function returns just the final configuration object. If True, then this function returns a tuple (config, unused_kwargs) where unused_kwargs is a dictionary consisting of the key/value pairs whose keys are not configuration attributes: i.e., the part of kwargs which has not been used to update config and is otherwise ignored. subfolder (str, optional, defaults to "") — In case the relevant files are located inside a subfolder of the model repo on huggingface.co, you can specify the folder name here. kwargs (Dict[str, Any], optional) — The values in kwargs of any keys which are configuration attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are not configuration attributes is controlled by the return_unused_kwargs keyword parameter. The configuration object instantiated from this pretrained model. Instantiate a GenerationConfig from a generation configuration file. Examples: >>> from transformers import GenerationConfig >>> >>> generation_config = GenerationConfig.from_pretrained("gpt2") >>> >>> generation_config.save_pretrained("./test/saved_model/") >>> generation_config = GenerationConfig.from_pretrained("./test/saved_model/") >>> >>> >>> generation_config.save_pretrained("./test/saved_model/", config_file_name="my_configuration.json") >>> generation_config = GenerationConfig.from_pretrained("./test/saved_model/", "my_configuration.json") >>> >>> >>> generation_config, unused_kwargs = GenerationConfig.from_pretrained( ... "gpt2", top_k=1, foo=False, do_sample=True, return_unused_kwargs=True ... ) >>> generation_config.top_k 1 >>> unused_kwargs {'foo': False} from_model_config < source > ( model_config: PretrainedConfig ) → GenerationConfig Parameters model_config (PretrainedConfig) — The model config that will be used to instantiate the generation config. The configuration object instantiated from those parameters. Instantiates a GenerationConfig from a PretrainedConfig. This function is useful to convert legacy PretrainedConfig objects, which may contain generation parameters, into a stand-alone GenerationConfig. save_pretrained < source > ( save_directory: typing.Union[str, os.PathLike] config_file_name: typing.Union[str, os.PathLike, NoneType] = None push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — Directory where the configuration JSON file will be saved (will be created if it does not exist). config_file_name (str or os.PathLike, optional, defaults to "generation_config.json") — Name of the generation configuration JSON file to be saved in save_directory. 
push_to_hub (bool, optional, defaults to False) — Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the repository you want to push to with repo_id (will default to the name of save_directory in your namespace). kwargs (Dict[str, Any], optional) — Additional keyword arguments passed along to the push_to_hub() method. Save a generation configuration object to the directory save_directory, so that it can be re-loaded using the from_pretrained() class method. GenerationMixin A class containing all functions for auto-regressive text generation, to be used as a mixin in PreTrainedModel. The class exposes generate(), which can be used for: greedy decoding by calling greedy_search() if num_beams=1 and do_sample=False contrastive search by calling contrastive_search() if penalty_alpha>0 and top_k>1 multinomial sampling by calling sample() if num_beams=1 and do_sample=True beam-search decoding by calling beam_search() if num_beams>1 and do_sample=False beam-search multinomial sampling by calling beam_sample() if num_beams>1 and do_sample=True diverse beam-search decoding by calling group_beam_search(), if num_beams>1 and num_beam_groups>1 constrained beam-search decoding by calling constrained_beam_search(), if constraints!=None or force_words_ids!=None You do not need to call any of the above methods directly. Pass custom parameter values to generate() instead. To learn more about decoding strategies refer to the text generation strategies guide. generate < source > ( inputs: typing.Optional[torch.Tensor] = None generation_config: typing.Optional[transformers.generation.configuration_utils.GenerationConfig] = None logits_processor: typing.Optional[transformers.generation.logits_process.LogitsProcessorList] = None stopping_criteria: typing.Optional[transformers.generation.stopping_criteria.StoppingCriteriaList] = None prefix_allowed_tokens_fn: typing.Union[typing.Callable[[int, torch.Tensor], typing.List[int]], NoneType] = None synced_gpus: typing.Optional[bool] = None assistant_model: typing.Optional[ForwardRef('PreTrainedModel')] = None streamer: typing.Optional[ForwardRef('BaseStreamer')] = None negative_prompt_ids: typing.Optional[torch.Tensor] = None negative_prompt_attention_mask: typing.Optional[torch.Tensor] = None **kwargs ) → ModelOutput or torch.LongTensor Parameters inputs (torch.Tensor of varying shape depending on the modality, optional) — The sequence used as a prompt for the generation or as model inputs to the encoder. If None the method initializes it with bos_token_id and a batch size of 1. For decoder-only models inputs should be in the format of input_ids. For encoder-decoder models inputs can represent any of input_ids, input_values, input_features, or pixel_values. generation_config (~generation.GenerationConfig, optional) — The generation configuration to be used as base parametrization for the generation call. **kwargs passed to generate matching the attributes of generation_config will override them. If generation_config is not provided, the default will be used, which has the following loading priority: 1) from the generation_config.json model file, if it exists; 2) from the model configuration. Please note that unspecified parameters will inherit GenerationConfig’s default values, whose documentation should be checked to parameterize generation. logits_processor (LogitsProcessorList, optional) — Custom logits processors that complement the default logits processors built from arguments and generation config. 
If a logit processor is passed that is already created with the arguments or a generation config, an error is thrown. This feature is intended for advanced users. stopping_criteria (StoppingCriteriaList, optional) — Custom stopping criteria that complement the default stopping criteria built from arguments and a generation config. If a stopping criterion is passed that is already created with the arguments or a generation config, an error is thrown. This feature is intended for advanced users. prefix_allowed_tokens_fn (Callable[[int, torch.Tensor], List[int]], optional) — If provided, this function constrains the beam search to allowed tokens only at each step. If not provided, no constraint is applied. This function takes 2 arguments: the batch ID batch_id and input_ids. It has to return a list with the allowed tokens for the next generation step conditioned on the batch ID batch_id and the previously generated tokens input_ids. This argument is useful for constrained generation conditioned on the prefix, as described in Autoregressive Entity Retrieval. synced_gpus (bool, optional) — Whether to continue running the while loop until max_length. Unless overridden, this flag will be set to True in a DeepSpeed ZeRO Stage 3 multi-GPU environment to avoid hanging if one GPU finishes generating before the other GPUs. Otherwise it’ll be set to False. assistant_model (PreTrainedModel, optional) — An assistant model that can be used to accelerate generation. The assistant model must have the exact same tokenizer. The acceleration is achieved when forecasting candidate tokens with the assistant model is much faster than running generation with the model you’re calling generate from. As such, the assistant model should be much smaller. streamer (BaseStreamer, optional) — Streamer object that will be used to stream the generated sequences. Generated tokens are passed through streamer.put(token_ids) and the streamer is responsible for any further processing. negative_prompt_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — The negative prompt needed for some processors such as CFG. The batch size must match the input batch size. This is an experimental feature, subject to breaking API changes in future versions. negative_prompt_attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) — Attention mask for negative_prompt_ids. kwargs (Dict[str, Any], optional) — Ad hoc parametrization of generate_config and/or additional model-specific kwargs that will be forwarded to the forward function of the model. If the model is an encoder-decoder model, encoder specific kwargs should not be prefixed and decoder specific kwargs should be prefixed with decoder_. Returns ModelOutput or torch.LongTensor A ModelOutput (if return_dict_in_generate=True or when config.return_dict_in_generate=True) or a torch.LongTensor. If the model is not an encoder-decoder model (model.config.is_encoder_decoder=False), the possible ModelOutput types are: GreedySearchDecoderOnlyOutput, SampleDecoderOnlyOutput, BeamSearchDecoderOnlyOutput, BeamSampleDecoderOnlyOutput If the model is an encoder-decoder model (model.config.is_encoder_decoder=True), the possible ModelOutput types are: GreedySearchEncoderDecoderOutput, SampleEncoderDecoderOutput, BeamSearchEncoderDecoderOutput, BeamSampleEncoderDecoderOutput Generates sequences of token ids for models with a language modeling head. 
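As a minimal sketch of the assistant_model option described above, the snippet below runs assisted decoding through generate(). The checkpoint pair (gpt2-large as the main model, gpt2 as the smaller assistant) is an illustrative assumption; any two checkpoints that share the exact same tokenizer should work the same way.

>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> # Illustrative checkpoints: the assistant must be a smaller model with the exact same tokenizer.
>>> tokenizer = AutoTokenizer.from_pretrained("gpt2-large")
>>> model = AutoModelForCausalLM.from_pretrained("gpt2-large")
>>> assistant = AutoModelForCausalLM.from_pretrained("gpt2")

>>> inputs = tokenizer("The quick brown fox", return_tensors="pt")
>>> # Passing assistant_model triggers assisted decoding inside generate().
>>> outputs = model.generate(**inputs, assistant_model=assistant, max_new_tokens=20)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)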
Most generation-controlling parameters are set in generation_config which, if not passed, will be set to the model’s default generation configuration. You can override any generation_config by passing the corresponding parameters to generate(), e.g. .generate(inputs, num_beams=4, do_sample=True). For an overview of generation strategies and code examples, check out the following guide. compute_transition_scores < source > ( sequences: Tensor scores: typing.Tuple[torch.Tensor] beam_indices: typing.Optional[torch.Tensor] = None normalize_logits: bool = False ) → torch.Tensor Parameters sequences (torch.LongTensor) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id. scores (tuple(torch.FloatTensor)) — Transition scores for each vocabulary token at each generation step. Beam transition scores consisting of log probabilities of tokens conditioned on log softmax of previously generated tokens Tuple of torch.FloatTensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size*num_beams, config.vocab_size). beam_indices (torch.LongTensor, optional) — Beam indices of generated token id at each generation step. torch.LongTensor of shape (batch_size*num_return_sequences, sequence_length). Only required if a num_beams>1 at generate-time. normalize_logits (bool, optional, defaults to False) — Whether to normalize the logits (which, for legacy reasons, may be unnormalized). A torch.Tensor of shape (batch_size*num_return_sequences, sequence_length) containing the transition scores (logits) Computes the transition scores of sequences given the generation scores (and beam indices, if beam search was used). This is a convenient method to quicky obtain the scores of the selected tokens at generation time. Examples: >>> from transformers import GPT2Tokenizer, AutoModelForCausalLM >>> import numpy as np >>> tokenizer = GPT2Tokenizer.from_pretrained("gpt2") >>> model = AutoModelForCausalLM.from_pretrained("gpt2") >>> tokenizer.pad_token_id = tokenizer.eos_token_id >>> inputs = tokenizer(["Today is"], return_tensors="pt") >>> >>> outputs = model.generate(**inputs, max_new_tokens=5, return_dict_in_generate=True, output_scores=True) >>> transition_scores = model.compute_transition_scores( ... outputs.sequences, outputs.scores, normalize_logits=True ... ) >>> >>> >>> input_length = 1 if model.config.is_encoder_decoder else inputs.input_ids.shape[1] >>> generated_tokens = outputs.sequences[:, input_length:] >>> for tok, score in zip(generated_tokens[0], transition_scores[0]): ... ... print(f"| {tok:5d} | {tokenizer.decode(tok):8s} | {score.numpy():.3f} | {np.exp(score.numpy()):.2%}") | 262 | the | -1.414 | 24.33% | 1110 | day | -2.609 | 7.36% | 618 | when | -2.010 | 13.40% | 356 | we | -1.859 | 15.58% | 460 | can | -2.508 | 8.14% >>> >>> outputs = model.generate( ... **inputs, ... max_new_tokens=5, ... num_beams=4, ... num_return_sequences=4, ... return_dict_in_generate=True, ... output_scores=True, ... ) >>> transition_scores = model.compute_transition_scores( ... outputs.sequences, outputs.scores, outputs.beam_indices, normalize_logits=False ... 
) >>> >>> >>> >>> output_length = input_length + np.sum(transition_scores.numpy() < 0, axis=1) >>> length_penalty = model.generation_config.length_penalty >>> reconstructed_scores = transition_scores.sum(axis=1) / (output_length**length_penalty) >>> print(np.allclose(outputs.sequences_scores, reconstructed_scores)) True greedy_search < source > ( input_ids: LongTensor logits_processor: typing.Optional[transformers.generation.logits_process.LogitsProcessorList] = None stopping_criteria: typing.Optional[transformers.generation.stopping_criteria.StoppingCriteriaList] = None max_length: typing.Optional[int] = None pad_token_id: typing.Optional[int] = None eos_token_id: typing.Union[int, typing.List[int], NoneType] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None output_scores: typing.Optional[bool] = None return_dict_in_generate: typing.Optional[bool] = None synced_gpus: bool = False streamer: typing.Optional[ForwardRef('BaseStreamer')] = None **model_kwargs ) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — The sequence used as a prompt for the generation. logits_processor (LogitsProcessorList, optional) — An instance of LogitsProcessorList. List of instances of class derived from LogitsProcessor used to modify the prediction scores of the language modeling head applied at each generation step. stopping_criteria (StoppingCriteriaList, optional) — An instance of StoppingCriteriaList. List of instances of class derived from StoppingCriteria used to tell if the generation loop should stop. max_length (int, optional, defaults to 20) — DEPRECATED. Use logits_processor or stopping_criteria directly to cap the number of generated tokens. The maximum length of the sequence to be generated. pad_token_id (int, optional) — The id of the padding token. eos_token_id (Union[int, List[int]], optional) — The id of the end-of-sequence token. Optionally, use a list to set multiple end-of-sequence tokens. output_attentions (bool, optional, defaults to False) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more details. output_hidden_states (bool, optional, defaults to False) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more details. output_scores (bool, optional, defaults to False) — Whether or not to return the prediction scores. See scores under returned tensors for more details. return_dict_in_generate (bool, optional, defaults to False) — Whether or not to return a ModelOutput instead of a plain tuple. synced_gpus (bool, optional, defaults to False) — Whether to continue running the while loop until max_length (needed for ZeRO stage 3) streamer (BaseStreamer, optional) — Streamer object that will be used to stream the generated sequences. Generated tokens are passed through streamer.put(token_ids) and the streamer is responsible for any further processing. model_kwargs — Additional model specific keyword arguments will be forwarded to the forward function of the model. If model is an encoder-decoder model the kwargs should include encoder_outputs. Generates sequences of token ids for models with a language modeling head using greedy decoding and can be used for text-decoder, text-to-text, speech-to-text, and vision-to-text models. In most cases, you do not need to call greedy_search() directly. Use generate() instead. 
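Since generate() is the recommended entry point, here is a minimal sketch of the same greedy strategy driven through generate() itself, i.e. with num_beams=1 and do_sample=False; the checkpoint and prompt are illustrative, and the lower-level greedy_search() example follows right after.

>>> from transformers import AutoTokenizer, AutoModelForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")

>>> inputs = tokenizer("It might be possible to", return_tensors="pt")
>>> # generate() dispatches to greedy search when num_beams=1 and do_sample=False.
>>> outputs = model.generate(**inputs, num_beams=1, do_sample=False, max_new_tokens=15)
>>> tokenizer.batch_decode(outputs, skip_special_tokens=True)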
For an overview of generation strategies and code examples, check the following guide. Examples: >>> from transformers import ( ... AutoTokenizer, ... AutoModelForCausalLM, ... LogitsProcessorList, ... MinLengthLogitsProcessor, ... StoppingCriteriaList, ... MaxLengthCriteria, ... ) >>> tokenizer = AutoTokenizer.from_pretrained("gpt2") >>> model = AutoModelForCausalLM.from_pretrained("gpt2") >>> >>> model.generation_config.pad_token_id = model.generation_config.eos_token_id >>> input_prompt = "It might be possible to" >>> input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids >>> >>> logits_processor = LogitsProcessorList( ... [ ... MinLengthLogitsProcessor(10, eos_token_id=model.generation_config.eos_token_id), ... ] ... ) >>> stopping_criteria = StoppingCriteriaList([MaxLengthCriteria(max_length=20)]) >>> outputs = model.greedy_search( ... input_ids, logits_processor=logits_processor, stopping_criteria=stopping_criteria ... ) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ["It might be possible to get a better understanding of the nature of the problem, but it's not"] sample < source > ( input_ids: LongTensor logits_processor: typing.Optional[transformers.generation.logits_process.LogitsProcessorList] = None stopping_criteria: typing.Optional[transformers.generation.stopping_criteria.StoppingCriteriaList] = None logits_warper: typing.Optional[transformers.generation.logits_process.LogitsProcessorList] = None max_length: typing.Optional[int] = None pad_token_id: typing.Optional[int] = None eos_token_id: typing.Union[int, typing.List[int], NoneType] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None output_scores: typing.Optional[bool] = None return_dict_in_generate: typing.Optional[bool] = None synced_gpus: bool = False streamer: typing.Optional[ForwardRef('BaseStreamer')] = None **model_kwargs ) → SampleDecoderOnlyOutput, SampleEncoderDecoderOutput or torch.LongTensor Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — The sequence used as a prompt for the generation. logits_processor (LogitsProcessorList, optional) — An instance of LogitsProcessorList. List of instances of class derived from LogitsProcessor used to modify the prediction scores of the language modeling head applied at each generation step. stopping_criteria (StoppingCriteriaList, optional) — An instance of StoppingCriteriaList. List of instances of class derived from StoppingCriteria used to tell if the generation loop should stop. logits_warper (LogitsProcessorList, optional) — An instance of LogitsProcessorList. List of instances of class derived from LogitsWarper used to warp the prediction score distribution of the language modeling head applied before multinomial sampling at each generation step. max_length (int, optional, defaults to 20) — DEPRECATED. Use logits_processor or stopping_criteria directly to cap the number of generated tokens. The maximum length of the sequence to be generated. pad_token_id (int, optional) — The id of the padding token. eos_token_id (Union[int, List[int]], optional) — The id of the end-of-sequence token. Optionally, use a list to set multiple end-of-sequence tokens. output_attentions (bool, optional, defaults to False) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more details. output_hidden_states (bool, optional, defaults to False) — Whether or not to return the hidden states of all layers. 
See hidden_states under returned tensors for more details. output_scores (bool, optional, defaults to False) — Whether or not to return the prediction scores. See scores under returned tensors for more details. return_dict_in_generate (bool, optional, defaults to False) — Whether or not to return a ModelOutput instead of a plain tuple. synced_gpus (bool, optional, defaults to False) — Whether to continue running the while loop until max_length (needed for ZeRO stage 3) streamer (BaseStreamer, optional) — Streamer object that will be used to stream the generated sequences. Generated tokens are passed through streamer.put(token_ids) and the streamer is responsible for any further processing. model_kwargs — Additional model specific kwargs will be forwarded to the forward function of the model. If model is an encoder-decoder model the kwargs should include encoder_outputs. A torch.LongTensor containing the generated tokens (default behaviour) or a SampleDecoderOnlyOutput if model.config.is_encoder_decoder=False and return_dict_in_generate=True or a SampleEncoderDecoderOutput if model.config.is_encoder_decoder=True. Generates sequences of token ids for models with a language modeling head using multinomial sampling and can be used for text-decoder, text-to-text, speech-to-text, and vision-to-text models. In most cases, you do not need to call sample() directly. Use generate() instead. For an overview of generation strategies and code examples, check the following guide. Examples: >>> from transformers import ( ... AutoTokenizer, ... AutoModelForCausalLM, ... LogitsProcessorList, ... MinLengthLogitsProcessor, ... TopKLogitsWarper, ... TemperatureLogitsWarper, ... StoppingCriteriaList, ... MaxLengthCriteria, ... ) >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("gpt2") >>> model = AutoModelForCausalLM.from_pretrained("gpt2") >>> >>> model.config.pad_token_id = model.config.eos_token_id >>> model.generation_config.pad_token_id = model.config.eos_token_id >>> input_prompt = "Today is a beautiful day, and" >>> input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids >>> >>> logits_processor = LogitsProcessorList( ... [ ... MinLengthLogitsProcessor(15, eos_token_id=model.generation_config.eos_token_id), ... ] ... ) >>> >>> logits_warper = LogitsProcessorList( ... [ ... TopKLogitsWarper(50), ... TemperatureLogitsWarper(0.7), ... ] ... ) >>> stopping_criteria = StoppingCriteriaList([MaxLengthCriteria(max_length=20)]) >>> torch.manual_seed(0) >>> outputs = model.sample( ... input_ids, ... logits_processor=logits_processor, ... logits_warper=logits_warper, ... stopping_criteria=stopping_criteria, ... 
) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['Today is a beautiful day, and we must do everything possible to make it a day of celebration.'] beam_search < source > ( input_ids: LongTensor beam_scorer: BeamScorer logits_processor: typing.Optional[transformers.generation.logits_process.LogitsProcessorList] = None stopping_criteria: typing.Optional[transformers.generation.stopping_criteria.StoppingCriteriaList] = None max_length: typing.Optional[int] = None pad_token_id: typing.Optional[int] = None eos_token_id: typing.Union[int, typing.List[int], NoneType] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None output_scores: typing.Optional[bool] = None return_dict_in_generate: typing.Optional[bool] = None synced_gpus: bool = False **model_kwargs ) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — The sequence used as a prompt for the generation. beam_scorer (BeamScorer) — An derived instance of BeamScorer that defines how beam hypotheses are constructed, stored and sorted during generation. For more information, the documentation of BeamScorer should be read. logits_processor (LogitsProcessorList, optional) — An instance of LogitsProcessorList. List of instances of class derived from LogitsProcessor used to modify the prediction scores of the language modeling head applied at each generation step. stopping_criteria (StoppingCriteriaList, optional) — An instance of StoppingCriteriaList. List of instances of class derived from StoppingCriteria used to tell if the generation loop should stop. max_length (int, optional, defaults to 20) — DEPRECATED. Use logits_processor or stopping_criteria directly to cap the number of generated tokens. The maximum length of the sequence to be generated. pad_token_id (int, optional) — The id of the padding token. eos_token_id (Union[int, List[int]], optional) — The id of the end-of-sequence token. Optionally, use a list to set multiple end-of-sequence tokens. output_attentions (bool, optional, defaults to False) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more details. output_hidden_states (bool, optional, defaults to False) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more details. output_scores (bool, optional, defaults to False) — Whether or not to return the prediction scores. See scores under returned tensors for more details. return_dict_in_generate (bool, optional, defaults to False) — Whether or not to return a ModelOutput instead of a plain tuple. synced_gpus (bool, optional, defaults to False) — Whether to continue running the while loop until max_length (needed for ZeRO stage 3) model_kwargs — Additional model specific kwargs will be forwarded to the forward function of the model. If model is an encoder-decoder model the kwargs should include encoder_outputs. Generates sequences of token ids for models with a language modeling head using beam search decoding and can be used for text-decoder, text-to-text, speech-to-text, and vision-to-text models. In most cases, you do not need to call beam_search() directly. Use generate() instead. For an overview of generation strategies and code examples, check the following guide. Examples: >>> from transformers import ( ... AutoTokenizer, ... AutoModelForSeq2SeqLM, ... LogitsProcessorList, ... MinLengthLogitsProcessor, ... BeamSearchScorer, ... 
) >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("t5-base") >>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base") >>> encoder_input_str = "translate English to German: How old are you?" >>> encoder_input_ids = tokenizer(encoder_input_str, return_tensors="pt").input_ids >>> >>> num_beams = 3 >>> >>> input_ids = torch.ones((num_beams, 1), device=model.device, dtype=torch.long) >>> input_ids = input_ids * model.config.decoder_start_token_id >>> >>> model_kwargs = { ... "encoder_outputs": model.get_encoder()( ... encoder_input_ids.repeat_interleave(num_beams, dim=0), return_dict=True ... ) ... } >>> >>> beam_scorer = BeamSearchScorer( ... batch_size=1, ... num_beams=num_beams, ... device=model.device, ... ) >>> >>> logits_processor = LogitsProcessorList( ... [ ... MinLengthLogitsProcessor(5, eos_token_id=model.config.eos_token_id), ... ] ... ) >>> outputs = model.beam_search(input_ids, beam_scorer, logits_processor=logits_processor, **model_kwargs) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['Wie alt bist du?'] beam_sample < source > ( input_ids: LongTensor beam_scorer: BeamScorer logits_processor: typing.Optional[transformers.generation.logits_process.LogitsProcessorList] = None stopping_criteria: typing.Optional[transformers.generation.stopping_criteria.StoppingCriteriaList] = None logits_warper: typing.Optional[transformers.generation.logits_process.LogitsProcessorList] = None max_length: typing.Optional[int] = None pad_token_id: typing.Optional[int] = None eos_token_id: typing.Union[int, typing.List[int], NoneType] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None output_scores: typing.Optional[bool] = None return_dict_in_generate: typing.Optional[bool] = None synced_gpus: bool = False **model_kwargs ) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — The sequence used as a prompt for the generation. beam_scorer (BeamScorer) — A derived instance of BeamScorer that defines how beam hypotheses are constructed, stored and sorted during generation. For more information, the documentation of BeamScorer should be read. logits_processor (LogitsProcessorList, optional) — An instance of LogitsProcessorList. List of instances of class derived from LogitsProcessor used to modify the prediction scores of the language modeling head applied at each generation step. stopping_criteria (StoppingCriteriaList, optional) — An instance of StoppingCriteriaList. List of instances of class derived from StoppingCriteria used to tell if the generation loop should stop. logits_warper (LogitsProcessorList, optional) — An instance of LogitsProcessorList. List of instances of class derived from LogitsWarper used to warp the prediction score distribution of the language modeling head applied before multinomial sampling at each generation step. max_length (int, optional, defaults to 20) — DEPRECATED. Use logits_processor or stopping_criteria directly to cap the number of generated tokens. The maximum length of the sequence to be generated. pad_token_id (int, optional) — The id of the padding token. eos_token_id (Union[int, List[int]], optional) — The id of the end-of-sequence token. Optionally, use a list to set multiple end-of-sequence tokens. output_attentions (bool, optional, defaults to False) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more details. 
output_hidden_states (bool, optional, defaults to False) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more details. output_scores (bool, optional, defaults to False) — Whether or not to return the prediction scores. See scores under returned tensors for more details. return_dict_in_generate (bool, optional, defaults to False) — Whether or not to return a ModelOutput instead of a plain tuple. synced_gpus (bool, optional, defaults to False) — Whether to continue running the while loop until max_length (needed for ZeRO stage 3) model_kwargs — Additional model specific kwargs will be forwarded to the forward function of the model. If model is an encoder-decoder model the kwargs should include encoder_outputs. Generates sequences of token ids for models with a language modeling head using beam search multinomial sampling and can be used for text-decoder, text-to-text, speech-to-text, and vision-to-text models. In most cases, you do not need to call beam_sample() directly. Use generate() instead. For an overview of generation strategies and code examples, check the following guide. Examples: >>> from transformers import ( ... AutoTokenizer, ... AutoModelForSeq2SeqLM, ... LogitsProcessorList, ... MinLengthLogitsProcessor, ... TopKLogitsWarper, ... TemperatureLogitsWarper, ... BeamSearchScorer, ... ) >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("t5-base") >>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base") >>> encoder_input_str = "translate English to German: How old are you?" >>> encoder_input_ids = tokenizer(encoder_input_str, return_tensors="pt").input_ids >>> >>> num_beams = 3 >>> >>> input_ids = torch.ones((num_beams, 1), device=model.device, dtype=torch.long) >>> input_ids = input_ids * model.config.decoder_start_token_id >>> >>> model_kwargs = { ... "encoder_outputs": model.get_encoder()( ... encoder_input_ids.repeat_interleave(num_beams, dim=0), return_dict=True ... ) ... } >>> >>> beam_scorer = BeamSearchScorer( ... batch_size=1, ... max_length=model.config.max_length, ... num_beams=num_beams, ... device=model.device, ... ) >>> >>> logits_processor = LogitsProcessorList( ... [MinLengthLogitsProcessor(5, eos_token_id=model.config.eos_token_id)] ... ) >>> >>> logits_warper = LogitsProcessorList( ... [ ... TopKLogitsWarper(50), ... TemperatureLogitsWarper(0.7), ... ] ... ) >>> outputs = model.beam_sample( ... input_ids, beam_scorer, logits_processor=logits_processor, logits_warper=logits_warper, **model_kwargs ... 
) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['Wie alt bist du?'] contrastive_search < source > ( input_ids: LongTensor top_k: typing.Optional[int] = 1 penalty_alpha: typing.Optional[float] = 0 logits_processor: typing.Optional[transformers.generation.logits_process.LogitsProcessorList] = None logits_warper: typing.Optional[transformers.generation.logits_process.LogitsProcessorList] = None stopping_criteria: typing.Optional[transformers.generation.stopping_criteria.StoppingCriteriaList] = None pad_token_id: typing.Optional[int] = None eos_token_id: typing.Union[int, typing.List[int], NoneType] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None output_scores: typing.Optional[bool] = None return_dict_in_generate: typing.Optional[bool] = None synced_gpus: bool = False streamer: typing.Optional[ForwardRef('BaseStreamer')] = None sequential: typing.Optional[bool] = None **model_kwargs ) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — The sequence used as a prompt for the generation. top_k (int, optional, defaults to 1) — The size of the candidate set that is used to re-rank for contrastive search penalty_alpha (float, optional, defaults to 0) — The degeneration penalty for contrastive search; activate when it is larger than 0 logits_processor (LogitsProcessorList, optional) — An instance of LogitsProcessorList. List of instances of class derived from LogitsProcessor used to modify the prediction scores of the language modeling head applied at each generation step. logits_warper (LogitsProcessorList, optional) — An instance of LogitsProcessorList. List of instances of class derived from LogitsWarper used to warp the prediction score distribution of the language modeling head applied before multinomial sampling at each generation step. stopping_criteria (StoppingCriteriaList, optional) — An instance of StoppingCriteriaList. List of instances of class derived from StoppingCriteria used to tell if the generation loop should stop. pad_token_id (int, optional) — The id of the padding token. eos_token_id (Union[int, List[int]], optional) — The id of the end-of-sequence token. Optionally, use a list to set multiple end-of-sequence tokens. output_attentions (bool, optional, defaults to False) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more details. output_hidden_states (bool, optional, defaults to False) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more details. output_scores (bool, optional, defaults to False) — Whether or not to return the prediction scores. See scores under returned tensors for more details. return_dict_in_generate (bool, optional, defaults to False) — Whether or not to return a ModelOutput instead of a plain tuple. synced_gpus (bool, optional, defaults to False) — Whether to continue running the while loop until max_length (needed for ZeRO stage 3) streamer (BaseStreamer, optional) — Streamer object that will be used to stream the generated sequences. Generated tokens are passed through streamer.put(token_ids) and the streamer is responsible for any further processing. sequential (bool, optional) — Switches topk hidden state computation from parallel to sequential to reduce memory if True. model_kwargs — Additional model specific keyword arguments will be forwarded to the forward function of the model. 
If model is an encoder-decoder model the kwargs should include encoder_outputs. Generates sequences of token ids for models with a language modeling head using contrastive search and can be used for text-decoder, text-to-text, speech-to-text, and vision-to-text models. In most cases, you do not need to call contrastive_search() directly. Use generate() instead. For an overview of generation strategies and code examples, check the following guide. Examples: >>> from transformers import ( ... AutoTokenizer, ... AutoModelForCausalLM, ... StoppingCriteriaList, ... MaxLengthCriteria, ... ) >>> tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m") >>> model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m") >>> >>> model.config.pad_token_id = model.config.eos_token_id >>> input_prompt = "DeepMind Company is" >>> input_ids = tokenizer(input_prompt, return_tensors="pt") >>> stopping_criteria = StoppingCriteriaList([MaxLengthCriteria(max_length=64)]) >>> outputs = model.contrastive_search( ... **input_ids, penalty_alpha=0.6, top_k=4, stopping_criteria=stopping_criteria ... ) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['DeepMind Company is a company that focuses on the development and commercialization of artificial intelligence (AI). DeepMind’s mission is to help people understand and solve problems that are difficult to solve in the world today.\n\nIn this post, we talk about the benefits of deep learning in business and how it'] group_beam_search < source > ( input_ids: LongTensor beam_scorer: BeamScorer logits_processor: typing.Optional[transformers.generation.logits_process.LogitsProcessorList] = None stopping_criteria: typing.Optional[transformers.generation.stopping_criteria.StoppingCriteriaList] = None max_length: typing.Optional[int] = None pad_token_id: typing.Optional[int] = None eos_token_id: typing.Union[int, typing.List[int], NoneType] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None output_scores: typing.Optional[bool] = None return_dict_in_generate: typing.Optional[bool] = None synced_gpus: bool = False **model_kwargs ) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — The sequence used as a prompt for the generation. beam_scorer (BeamScorer) — An derived instance of BeamScorer that defines how beam hypotheses are constructed, stored and sorted during generation. For more information, the documentation of BeamScorer should be read. logits_processor (LogitsProcessorList, optional) — An instance of LogitsProcessorList. List of instances of class derived from LogitsProcessor used to modify the prediction scores of the language modeling head applied at each generation step. stopping_criteria (StoppingCriteriaList, optional) — An instance of StoppingCriteriaList. List of instances of class derived from StoppingCriteria used to tell if the generation loop should stop. max_length (int, optional, defaults to 20) — DEPRECATED. Use logits_processor or stopping_criteria directly to cap the number of generated tokens. The maximum length of the sequence to be generated. pad_token_id (int, optional) — The id of the padding token. eos_token_id (Union[int, List[int]], optional) — The id of the end-of-sequence token. Optionally, use a list to set multiple end-of-sequence tokens. output_attentions (bool, optional, defaults to False) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more details. 
output_hidden_states (bool, optional, defaults to False) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more details. output_scores (bool, optional, defaults to False) — Whether or not to return the prediction scores. See scores under returned tensors for more details. return_dict_in_generate (bool, optional, defaults to False) — Whether or not to return a ModelOutput instead of a plain tuple. synced_gpus (bool, optional, defaults to False) — Whether to continue running the while loop until max_length (needed for ZeRO stage 3) model_kwargs — Additional model specific kwargs that will be forwarded to the forward function of the model. If model is an encoder-decoder model the kwargs should include encoder_outputs. Generates sequences of token ids for models with a language modeling head using diverse beam search decoding and can be used for text-decoder, text-to-text, speech-to-text, and vision-to-text models. In most cases, you do not need to call group_beam_search() directly. Use generate() instead. For an overview of generation strategies and code examples, check the following guide. Examples: >>> from transformers import ( ... AutoTokenizer, ... AutoModelForSeq2SeqLM, ... LogitsProcessorList, ... MinLengthLogitsProcessor, ... HammingDiversityLogitsProcessor, ... BeamSearchScorer, ... ) >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("t5-base") >>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base") >>> encoder_input_str = "translate English to German: How old are you?" >>> encoder_input_ids = tokenizer(encoder_input_str, return_tensors="pt").input_ids >>> >>> num_beams = 6 >>> >>> input_ids = torch.ones((num_beams, 1), device=model.device, dtype=torch.long) >>> input_ids = input_ids * model.config.decoder_start_token_id >>> >>> model_kwargs = { ... "encoder_outputs": model.get_encoder()( ... encoder_input_ids.repeat_interleave(num_beams, dim=0), return_dict=True ... ) ... } >>> >>> beam_scorer = BeamSearchScorer( ... batch_size=1, ... max_length=model.config.max_length, ... num_beams=num_beams, ... device=model.device, ... num_beam_groups=3, ... ) >>> >>> logits_processor = LogitsProcessorList( ... [ ... HammingDiversityLogitsProcessor(5.5, num_beams=6, num_beam_groups=3), ... MinLengthLogitsProcessor(5, eos_token_id=model.config.eos_token_id), ... ] ... ) >>> outputs = model.group_beam_search( ... input_ids, beam_scorer, logits_processor=logits_processor, **model_kwargs ... ) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['Wie alt bist du?'] constrained_beam_search < source > ( input_ids: LongTensor constrained_beam_scorer: ConstrainedBeamSearchScorer logits_processor: typing.Optional[transformers.generation.logits_process.LogitsProcessorList] = None stopping_criteria: typing.Optional[transformers.generation.stopping_criteria.StoppingCriteriaList] = None max_length: typing.Optional[int] = None pad_token_id: typing.Optional[int] = None eos_token_id: typing.Union[int, typing.List[int], NoneType] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None output_scores: typing.Optional[bool] = None return_dict_in_generate: typing.Optional[bool] = None synced_gpus: typing.Optional[bool] = None **model_kwargs ) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — The sequence used as a prompt for the generation. 
constrained_beam_scorer (ConstrainedBeamSearchScorer) — A derived instance of BeamScorer that defines how beam hypotheses are constructed, stored and sorted during generation, while satisfying a list of positive constraints. For more information, the documentation of ConstrainedBeamSearchScorer should be read. logits_processor (LogitsProcessorList, optional) — An instance of LogitsProcessorList. List of instances of class derived from LogitsProcessor used to modify the prediction scores of the language modeling head applied at each generation step. stopping_criteria (StoppingCriteriaList, optional) — An instance of StoppingCriteriaList. List of instances of class derived from StoppingCriteria used to tell if the generation loop should stop. logits_warper (LogitsProcessorList, optional) — An instance of LogitsProcessorList. List of instances of class derived from LogitsWarper used to warp the prediction score distribution of the language modeling head applied before multinomial sampling at each generation step. max_length (int, optional, defaults to 20) — DEPRECATED. Use logits_processor or stopping_criteria directly to cap the number of generated tokens. The maximum length of the sequence to be generated. pad_token_id (int, optional) — The id of the padding token. eos_token_id (Union[int, List[int]], optional) — The id of the end-of-sequence token. Optionally, use a list to set multiple end-of-sequence tokens. output_attentions (bool, optional, defaults to False) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more details. output_hidden_states (bool, optional, defaults to False) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more details. output_scores (bool, optional, defaults to False) — Whether or not to return the prediction scores. See scores under returned tensors for more details. return_dict_in_generate (bool, optional, defaults to False) — Whether or not to return a ModelOutput instead of a plain tuple. synced_gpus (bool, optional, defaults to False) — Whether to continue running the while loop until max_length (needed for ZeRO stage 3) model_kwargs — Additional model specific kwargs will be forwarded to the forward function of the model. If model is an encoder-decoder model the kwargs should include encoder_outputs. Generates sequences of token ids for models with a language modeling head using constrained beam search decoding and can be used for text-decoder, text-to-text, speech-to-text, and vision-to-text models. In most cases, you do not need to call constrained_beam_search() directly. Use generate() instead. For an overview of generation strategies and code examples, check the following guide. Examples: >>> from transformers import ( ... AutoTokenizer, ... AutoModelForSeq2SeqLM, ... LogitsProcessorList, ... MinLengthLogitsProcessor, ... ConstrainedBeamSearchScorer, ... PhrasalConstraint, ... ) >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("t5-base") >>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base") >>> encoder_input_str = "translate English to German: How old are you?" >>> encoder_input_ids = tokenizer(encoder_input_str, return_tensors="pt").input_ids >>> >>> num_beams = 3 >>> >>> input_ids = torch.ones((num_beams, 1), device=model.device, dtype=torch.long) >>> input_ids = input_ids * model.config.decoder_start_token_id >>> >>> model_kwargs = { ... "encoder_outputs": model.get_encoder()( ... 
encoder_input_ids.repeat_interleave(num_beams, dim=0), return_dict=True ... ) ... } >>> constraint_str = "Sie" >>> constraint_token_ids = tokenizer.encode(constraint_str)[:-1] >>> constraints = [PhrasalConstraint(token_ids=constraint_token_ids)] >>> >>> beam_scorer = ConstrainedBeamSearchScorer( ... batch_size=1, num_beams=num_beams, device=model.device, constraints=constraints ... ) >>> >>> logits_processor = LogitsProcessorList( ... [ ... MinLengthLogitsProcessor(5, eos_token_id=model.config.eos_token_id), ... ] ... ) >>> outputs = model.constrained_beam_search( ... input_ids, beam_scorer, constraints=constraints, logits_processor=logits_processor, **model_kwargs ... ) >>> tokenizer.batch_decode(outputs, skip_special_tokens=True) ['Wie alt sind Sie?'] TFGenerationMixin class transformers.TFGenerationMixin < source > ( ) A class containing all of the functions supporting generation, to be used as a mixin in TFPreTrainedModel. The class exposes generate(), which can be used for: greedy decoding by calling greedy_search() if num_beams=1 and do_sample=False contrastive search by calling contrastive_search() if penalty_alpha>0 and top_k>1 multinomial sampling by calling sample() if num_beams=1 and do_sample=True beam-search decoding by calling beam_search() if num_beams>1 You do not need to call any of the above methods directly. Pass custom parameter values to ‘generate’ instead. To learn more about decoding strategies refer to the text generation strategies guide. generate < source > ( inputs: typing.Optional[tensorflow.python.framework.ops.Tensor] = None generation_config: typing.Optional[transformers.generation.configuration_utils.GenerationConfig] = None logits_processor: typing.Optional[transformers.generation.tf_logits_process.TFLogitsProcessorList] = None seed = None **kwargs ) → ModelOutput or tf.Tensor Parameters inputs (tf.Tensor of varying shape depending on the modality, optional) — The sequence used as a prompt for the generation or as model inputs to the encoder. If None the method initializes it with bos_token_id and a batch size of 1. For decoder-only models inputs should of in the format of input_ids. For encoder-decoder models inputs can represent any of input_ids, input_values, input_features, or pixel_values. generation_config (~generation.GenerationConfig, optional) — The generation configuration to be used as base parametrization for the generation call. **kwargs passed to generate matching the attributes of generation_config will override them. If generation_config is not provided, the default will be used, which had the following loading priority: 1) from the generation_config.json model file, if it exists; 2) from the model configuration. Please note that unspecified parameters will inherit GenerationConfig’s default values, whose documentation should be checked to parameterize generation. logits_processor (LogitsProcessorList, optional) — Custom logits processors that complement the default logits processors built from arguments and generation config. If a logit processor is passed that is already created with the arguments or a generation config an error is thrown. This feature is intended for advanced users. seed (List[int], optional) — Random seed to control sampling, containing two integers, used when do_sample is True. See the seed argument from stateless functions in tf.random. kwargs (Dict[str, Any], optional) — Ad hoc parametrization of generate_config and/or additional model-specific kwargs that will be forwarded to the forward function of the model. 
If the model is an encoder-decoder model, encoder specific kwargs should not be prefixed and decoder specific kwargs should be prefixed with decoder_. Returns ModelOutput or tf.Tensor A ModelOutput (if return_dict_in_generate=True or when config.return_dict_in_generate=True) or a tf.Tensor. If the model is not an encoder-decoder model (model.config.is_encoder_decoder=False), the possible ModelOutput types are: TFGreedySearchDecoderOnlyOutput, TFSampleDecoderOnlyOutput, TFBeamSearchDecoderOnlyOutput, TFBeamSampleDecoderOnlyOutput If the model is an encoder-decoder model (model.config.is_encoder_decoder=True), the possible ModelOutput types are: TFGreedySearchEncoderDecoderOutput, TFSampleEncoderDecoderOutput, TFBeamSearchEncoderDecoderOutput, TFBeamSampleEncoderDecoderOutput Generates sequences of token ids for models with a language modeling head. Most generation-controlling parameters are set in generation_config which, if not passed, will be set to the model’s default generation configuration. You can override any generation_config by passing the corresponding parameters to generate, e.g. .generate(inputs, num_beams=4, do_sample=True). For an overview of generation strategies and code examples, check out the following guide. compute_transition_scores < source > ( sequences: Tensor scores: typing.Tuple[tensorflow.python.framework.ops.Tensor] beam_indices: typing.Optional[tensorflow.python.framework.ops.Tensor] = None normalize_logits: bool = False ) → tf.Tensor Parameters sequences (tf.Tensor) — The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id. scores (tuple(tf.Tensor)) — Transition scores for each vocabulary token at each generation step. Beam transition scores consisting of log probabilities of tokens conditioned on log softmax of previously generated tokens Tuple of tf.Tensor with up to max_new_tokens elements (one element for each generated token), with each tensor of shape (batch_size*num_beams, config.vocab_size). beam_indices (tf.Tensor, optional) — Beam indices of generated token id at each generation step. tf.Tensor of shape (batch_size*num_return_sequences, sequence_length). Only required if a num_beams>1 at generate-time. normalize_logits (bool, optional, defaults to False) — Whether to normalize the logits (which, for legacy reasons, may be unnormalized). A tf.Tensor of shape (batch_size*num_return_sequences, sequence_length) containing the transition scores (logits) Computes the transition scores of sequences given the generation scores (and beam indices, if beam search was used). This is a convenient method to quicky obtain the scores of the selected tokens at generation time. Examples: >>> from transformers import GPT2Tokenizer, TFAutoModelForCausalLM >>> import numpy as np >>> tokenizer = GPT2Tokenizer.from_pretrained("gpt2") >>> model = TFAutoModelForCausalLM.from_pretrained("gpt2") >>> tokenizer.pad_token_id = tokenizer.eos_token_id >>> inputs = tokenizer(["Today is"], return_tensors="tf") >>> >>> outputs = model.generate(**inputs, max_new_tokens=5, return_dict_in_generate=True, output_scores=True) >>> transition_scores = model.compute_transition_scores( ... outputs.sequences, outputs.scores, normalize_logits=True ... ) >>> >>> >>> input_length = 1 if model.config.is_encoder_decoder else inputs.input_ids.shape[1] >>> generated_tokens = outputs.sequences[:, input_length:] >>> for tok, score in zip(generated_tokens[0], transition_scores[0]): ... ... 
print(f"| {tok:5d} | {tokenizer.decode(tok):8s} | {score.numpy():.3f} | {np.exp(score.numpy()):.2%}") | 262 | the | -1.413 | 24.33% | 1110 | day | -2.609 | 7.36% | 618 | when | -2.009 | 13.41% | 356 | we | -1.859 | 15.58% | 460 | can | -2.508 | 8.14% >>> >>> outputs = model.generate( ... **inputs, ... max_new_tokens=5, ... num_beams=4, ... num_return_sequences=4, ... return_dict_in_generate=True, ... output_scores=True, ... ) >>> transition_scores = model.compute_transition_scores( ... outputs.sequences, outputs.scores, outputs.beam_indices, normalize_logits=False ... ) >>> >>> >>> >>> output_length = input_length + np.sum(transition_scores.numpy() < 0, axis=1) >>> length_penalty = model.generation_config.length_penalty >>> reconstructed_scores = np.sum(transition_scores, axis=1) / (output_length**length_penalty) >>> print(np.allclose(outputs.sequences_scores, reconstructed_scores)) True FlaxGenerationMixin class transformers.FlaxGenerationMixin < source > ( ) A class containing all functions for auto-regressive text generation, to be used as a mixin in FlaxPreTrainedModel. The class exposes generate(), which can be used for: greedy decoding by calling _greedy_search() if num_beams=1 and do_sample=False multinomial sampling by calling _sample() if num_beams=1 and do_sample=True beam-search decoding by calling _beam_search() if num_beams>1 and do_sample=False You do not need to call any of the above methods directly. Pass custom parameter values to ‘generate’ instead. To learn more about decoding strategies refer to the text generation strategies guide. generate < source > ( input_ids: Array generation_config: typing.Optional[transformers.generation.configuration_utils.GenerationConfig] = None prng_key: typing.Optional[jax.Array] = None trace: bool = True params: typing.Union[typing.Dict[str, jax.Array], NoneType] = None logits_processor: typing.Optional[transformers.generation.flax_logits_process.FlaxLogitsProcessorList] = None **kwargs ) Parameters input_ids (jnp.ndarray of shape (batch_size, sequence_length)) — The sequence used as a prompt for the generation. generation_config (~generation.GenerationConfig, optional) — The generation configuration to be used as base parametrization for the generation call. **kwargs passed to generate matching the attributes of generation_config will override them. If generation_config is not provided, the default will be used, which had the following loading priority: 1) from the generation_config.json model file, if it exists; 2) from the model configuration. Please note that unspecified parameters will inherit GenerationConfig’s default values, whose documentation should be checked to parameterize generation. trace (bool, optional, defaults to True) — Whether to trace generation. Setting trace=False should only be used for debugging and will lead to a considerably slower runtime. params (Dict[str, jnp.ndarray], optional) — Optionally the model parameters can be passed. Can be useful for parallelized generation. logits_processor (FlaxLogitsProcessorList , optional) — Custom logits processors that complement the default logits processors built from arguments and generation config. If a logit processor is passed that is already created with the arguments or a generation config an error is thrown. This feature is intended for advanced users. kwargs (Dict[str, Any], optional) — Ad hoc parametrization of generate_config and/or additional model-specific kwargs that will be forwarded to the forward function of the model. 
If the model is an encoder-decoder model, encoder specific kwargs should not be prefixed and decoder specific kwargs should be prefixed with decoder_. Generates sequences of token ids for models with a language modeling head.
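The Flax mixin has no inline example above, so the following is a minimal, hedged sketch; it assumes a checkpoint with Flax weights is available (gpt2 is used purely for illustration) and that generation parameters are passed as keyword arguments, as with the PyTorch and TensorFlow mixins.

>>> from transformers import AutoTokenizer, FlaxAutoModelForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = FlaxAutoModelForCausalLM.from_pretrained("gpt2")

>>> inputs = tokenizer("Today is a beautiful day, and", return_tensors="np")
>>> # Greedy decoding; the generated token ids are returned under `.sequences`.
>>> outputs = model.generate(inputs.input_ids, max_new_tokens=10, do_sample=False)
>>> tokenizer.batch_decode(outputs.sequences, skip_special_tokens=True)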
https://huggingface.co/docs/transformers/main_classes/model
Models The base classes PreTrainedModel, TFPreTrainedModel, and FlaxPreTrainedModel implement the common methods for loading/saving a model either from a local file or directory, or from a pretrained model configuration provided by the library (downloaded from HuggingFace’s AWS S3 repository). PreTrainedModel and TFPreTrainedModel also implement a few methods which are common among all the models to: resize the input token embeddings when new tokens are added to the vocabulary prune the attention heads of the model. The other methods that are common to each model are defined in ModuleUtilsMixin (for the PyTorch models) and ~modeling_tf_utils.TFModuleUtilsMixin (for the TensorFlow models) or for text generation, GenerationMixin (for the PyTorch models), TFGenerationMixin (for the TensorFlow models) and FlaxGenerationMixin (for the Flax/JAX models). PreTrainedModel class transformers.PreTrainedModel < source > ( config: PretrainedConfig *inputs **kwargs ) Base class for all models. PreTrainedModel takes care of storing the configuration of the models and handles methods for loading, downloading and saving models as well as a few methods common to all models to: resize the input embeddings, prune heads in the self-attention heads. Class attributes (overridden by derived classes): config_class (PretrainedConfig) — A subclass of PretrainedConfig to use as configuration class for this model architecture. load_tf_weights (Callable) — A python method for loading a TensorFlow checkpoint in a PyTorch model, taking as arguments: model (PreTrainedModel) — An instance of the model on which to load the TensorFlow checkpoint. config (PreTrainedConfig) — An instance of the configuration associated to the model. path (str) — A path to the TensorFlow checkpoint. base_model_prefix (str) — A string indicating the attribute associated to the base model in derived classes of the same architecture adding modules on top of the base model. is_parallelizable (bool) — A flag indicating whether this model supports model parallelization. main_input_name (str) — The name of the principal input to the model (often input_ids for NLP models, pixel_values for vision models and input_values for speech models). push_to_hub < source > ( repo_id: str use_temp_dir: typing.Optional[bool] = None commit_message: typing.Optional[str] = None private: typing.Optional[bool] = None token: typing.Union[bool, str, NoneType] = None max_shard_size: typing.Union[int, str, NoneType] = '10GB' create_pr: bool = False safe_serialization: bool = False revision: str = None **deprecated_kwargs ) Parameters repo_id (str) — The name of the repository you want to push your model to. It should contain your organization name when pushing to a given organization. use_temp_dir (bool, optional) — Whether or not to use a temporary directory to store the files saved before they are pushed to the Hub. Will default to True if there is no directory named like repo_id, False otherwise. commit_message (str, optional) — Message to commit while pushing. Will default to "Upload model". private (bool, optional) — Whether or not the repository created should be private. token (bool or str, optional) — The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). Will default to True if repo_url is not specified. max_shard_size (int or str, optional, defaults to "10GB") — Only applicable for models. The maximum size for a checkpoint before being sharded. 
Checkpoints shard will then be each of size lower than this size. If expressed as a string, needs to be digits followed by a unit (like "5MB"). create_pr (bool, optional, defaults to False) — Whether or not to create a PR with the uploaded files or directly commit. safe_serialization (bool, optional, defaults to False) — Whether or not to convert the model weights in safetensors format for safer serialization. revision (str, optional) — Branch to push the uploaded files to. Upload the model file to the 🤗 Model Hub. Examples: from transformers import AutoModel model = AutoModel.from_pretrained("bert-base-cased") model.push_to_hub("my-finetuned-bert") model.push_to_hub("huggingface/my-finetuned-bert") can_generate < source > ( ) → bool Whether this model can generate sequences with .generate(). Returns whether this model can generate sequences with .generate(). Removes the _require_grads_hook. Enables the gradients for the input embeddings. This is useful for fine-tuning adapter weights while keeping the model weights fixed. from_pretrained < source > ( pretrained_model_name_or_path: typing.Union[str, os.PathLike, NoneType] *model_args config: typing.Union[transformers.configuration_utils.PretrainedConfig, str, os.PathLike, NoneType] = None cache_dir: typing.Union[str, os.PathLike, NoneType] = None ignore_mismatched_sizes: bool = False force_download: bool = False local_files_only: bool = False token: typing.Union[bool, str, NoneType] = None revision: str = 'main' use_safetensors: bool = None **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike, optional) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. A path or url to a model folder containing a flax checkpoint file in .msgpack format (e.g, ./flax_model/ containing flax_model.msgpack). In this case, from_flax should be set to True. None if you are both providing the configuration and state dictionary (resp. with keyword arguments config and state_dict). model_args (sequence of positional arguments, optional) — All remaining positional arguments will be passed to the underlying model’s __init__ method. config (Union[PretrainedConfig, str, os.PathLike], optional) — Can be either: an instance of a class derived from PretrainedConfig, a string or path valid as input to from_pretrained(). Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. 
state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir (Union[str, os.PathLike], optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). from_flax (bool, optional, defaults to False) — Load the model weights from a Flax checkpoint save file (see docstring of pretrained_model_name_or_path argument). ignore_mismatched_sizes (bool, optional, defaults to False) — Whether or not to raise an error if some of the weights from the checkpoint do not have the same size as the weights of the model (if for instance, you are instantiating a model with 10 labels from a checkpoint with 3 labels). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (i.e., do not try to download the model). token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, or not specified, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. To test a pull request you made on the Hub, you can pass revision="refs/pr/<pr_number>". mirror (str, optional) — Mirror source to accelerate downloads in China. If you are from China and have an accessibility problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. Please refer to the mirror site for more information. _fast_init (bool, optional, defaults to True) — Whether or not to disable fast initialization. One should only disable _fast_init to ensure backwards compatibility with transformers.__version__ < 4.6.0 for seeded model initialization. This argument will be removed at the next major version. See pull request 11471 for more information. Parameters for big model inference low_cpu_mem_usage (bool, optional) — Tries to not use more than 1x model size in CPU memory (including peak memory) while loading the model. This is an experimental feature and subject to change at any moment.
torch_dtype (str or torch.dtype, optional) — Override the default torch.dtype and load the model under a specific dtype. The different options are: torch.float16 or torch.bfloat16 or torch.float: load in a specified dtype, ignoring the model's config.torch_dtype if one exists. If not specified the model will get loaded in torch.float (fp32). "auto" - the torch_dtype entry in the config.json file of the model will be used if present. If that entry isn't found, the dtype of the first floating-point weight in the checkpoint is used instead. This will load the model using the dtype it was saved in at the end of training; it can't be used as an indicator of how the model was trained, since a model could be trained in one of the half-precision dtypes but saved in fp32. For some models the dtype they were trained in is unknown - you may try to check the model's paper or reach out to the authors and ask them to add this information to the model's card and to insert the torch_dtype entry in config.json on the hub. device_map (str or Dict[str, Union[int, str, torch.device]] or int or torch.device, optional) — A map that specifies where each submodule should go. It doesn't need to be refined to each parameter/buffer name; once a given module name is inside, every submodule of it will be sent to the same device. If we only pass the device (e.g., "cpu", "cuda:1", "mps", or a GPU ordinal rank like 1) on which the model will be allocated, the device map will map the entire model to this device. Passing device_map = 0 means put the whole model on GPU 0. To have Accelerate compute the most optimized device_map automatically, set device_map="auto". For more information about each option see designing a device map. max_memory (Dict, optional) — A dictionary mapping device identifiers to maximum memory. Will default to the maximum memory available for each GPU and the available CPU RAM if unset. offload_folder (str or os.PathLike, optional) — If the device_map contains any value "disk", the folder where we will offload weights. offload_state_dict (bool, optional) — If True, will temporarily offload the CPU state dict to the hard drive to avoid getting out of CPU RAM if the weight of the CPU state dict + the biggest shard of the checkpoint does not fit. Defaults to True when there is some disk offload. load_in_8bit (bool, optional, defaults to False) — If True, will convert the loaded model into a mixed-8bit quantized model. To use this feature please install bitsandbytes (pip install -U bitsandbytes). load_in_4bit (bool, optional, defaults to False) — If True, will convert the loaded model into a 4-bit precision quantized model. To use this feature install the latest version of bitsandbytes (pip install -U bitsandbytes). quantization_config (Union[QuantizationConfigMixin, Dict], optional) — A dictionary of configuration parameters or a QuantizationConfigMixin object for quantization (e.g. bitsandbytes, gptq). subfolder (str, optional, defaults to "") — In case the relevant files are located inside a subfolder of the model repo on huggingface.co, you can specify the folder name here. variant (str, optional) — If specified, load weights from the variant filename, e.g. pytorch_model.<variant>.bin. variant is ignored when using from_tf or from_flax. use_safetensors (bool, optional, defaults to None) — Whether or not to use safetensors checkpoints. Defaults to None. If not specified and safetensors is not installed, it will be set to False.
kwargs (remaining dictionary of keyword arguments, optional) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate a pretrained pytorch model from a pre-trained model configuration. The model is set in evaluation mode by default using model.eval() (Dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train(). The warning Weights from XXX not initialized from pretrained model means that the weights of XXX do not come pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning task. The warning Weights from XXX not used in YYY means that the layer XXX is not used by YYY, therefore those weights are discarded. Activate the special “offline-mode” to use this method in a firewalled environment. Examples: >>> from transformers import BertConfig, BertModel >>> >>> model = BertModel.from_pretrained("bert-base-uncased") >>> >>> model = BertModel.from_pretrained("./test/saved_model/") >>> >>> model = BertModel.from_pretrained("bert-base-uncased", output_attentions=True) >>> assert model.config.output_attentions == True >>> >>> config = BertConfig.from_json_file("./tf_model/my_tf_model_config.json") >>> model = BertModel.from_pretrained("./tf_model/my_tf_checkpoint.ckpt.index", from_tf=True, config=config) >>> >>> model = BertModel.from_pretrained("bert-base-uncased", from_flax=True) low_cpu_mem_usage algorithm: This is an experimental function that loads the model using ~1x model size CPU memory Here is how it works: save which state_dict keys we have drop state_dict before the model is created, since the latter takes 1x model size CPU memory after the model has been instantiated switch to the meta device all params/buffers that are going to be replaced from the loaded state_dict load state_dict 2nd time replace the params/buffers from the state_dict Currently, it can’t handle deepspeed ZeRO stage 3 and ignores loading errors get_input_embeddings < source > ( ) → nn.Module A torch module mapping vocabulary to hidden states. Returns the model’s input embeddings. get_memory_footprint < source > ( return_buffers = True ) Parameters return_buffers (bool, optional, defaults to True) — Whether to return the size of the buffer tensors in the computation of the memory footprint. Buffers are tensors that do not require gradients and not registered as parameters. E.g. mean and std in batch norm layers. Please see: https://discuss.pytorch.org/t/what-pytorch-means-by-buffers/120266/2 Get the memory footprint of a model. This will return the memory footprint of the current model in bytes. Useful to benchmark the memory footprint of the current model and design some tests. 
Solution inspired from the PyTorch discussions: https://discuss.pytorch.org/t/gpu-memory-that-model-uses/56822/2 get_output_embeddings < source > ( ) → nn.Module A torch module mapping hidden states to vocabulary. Returns the model’s output embeddings. Deactivates gradient checkpointing for the current model. Note that in other frameworks this feature can be referred to as “activation checkpointing” or “checkpoint activations”. Activates gradient checkpointing for the current model. Note that in other frameworks this feature can be referred to as “activation checkpointing” or “checkpoint activations”. If needed prunes and maybe initializes weights. If using a custom PreTrainedModel, you need to implement any initialization logic in _init_weights. A method executed at the end of each Transformer model initialization, to execute code that needs the model’s modules properly initialized (such as weight initialization). prune_heads < source > ( heads_to_prune: typing.Dict[int, typing.List[int]] ) Parameters heads_to_prune (Dict[int, List[int]]) — Dictionary with keys being selected layer indices (int) and associated values being the list of heads to prune in said layer (list of int). For instance {1: [0, 2], 2: [2, 3]} will prune heads 0 and 2 on layer 1 and heads 2 and 3 on layer 2. Prunes heads of the base model. register_for_auto_class < source > ( auto_class = 'AutoModel' ) Parameters auto_class (str or type, optional, defaults to "AutoModel") — The auto class to register this new model with. Register this class with a given auto class. This should only be used for custom models as the ones in the library are already mapped with an auto class. This API is experimental and may have some slight breaking changes in the next releases. resize_token_embeddings < source > ( new_num_tokens: typing.Optional[int] = None pad_to_multiple_of: typing.Optional[int] = None ) → torch.nn.Embedding Parameters new_num_tokens (int, optional) — The number of new tokens in the embedding matrix. Increasing the size will add newly initialized vectors at the end. Reducing the size will remove vectors from the end. If not provided or None, just returns a pointer to the input tokens torch.nn.Embedding module of the model without doing anything. pad_to_multiple_of (int, optional) — If set will pad the embedding matrix to a multiple of the provided value.If new_num_tokens is set to None will just pad the embedding to a multiple of pad_to_multiple_of. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta), or on TPUs which benefit from having sequence lengths be a multiple of 128. For more details about this, or help on choosing the correct value for resizing, refer to this guide: https://docs.nvidia.com/deeplearning/performance/dl-performance-matrix-multiplication/index.html#requirements-tc Returns torch.nn.Embedding Pointer to the input tokens Embeddings Module of the model. Resizes input token embeddings matrix of the model if new_num_tokens != config.vocab_size. Takes care of tying weights embeddings afterwards if the model class has a tie_weights() method. Reverts the transformation from to_bettertransformer() so that the original modeling is used, for example in order to save the model. 
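A minimal sketch of the resize_token_embeddings workflow described above, assuming a BERT checkpoint; the two added token strings and the pad_to_multiple_of value are illustrative choices, not part of the original documentation:

from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Add hypothetical domain-specific tokens to the tokenizer vocabulary.
num_added = tokenizer.add_tokens(["<chem_token>", "<bio_token>"])

# Grow the input embedding matrix to match the new vocabulary size;
# pad_to_multiple_of=64 keeps the matrix Tensor Core friendly, as noted in the parameter description above.
embeddings = model.resize_token_embeddings(len(tokenizer), pad_to_multiple_of=64)
print(num_added, embeddings.num_embeddings)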
save_pretrained < source > ( save_directory: typing.Union[str, os.PathLike] is_main_process: bool = True state_dict: typing.Optional[dict] = None save_function: typing.Callable = torch.save push_to_hub: bool = False max_shard_size: typing.Union[int, str] = '10GB' safe_serialization: bool = False variant: typing.Optional[str] = None token: typing.Union[bool, str, NoneType] = None save_peft_format: bool = True **kwargs ) Parameters save_directory (str or os.PathLike) — Directory to which to save. Will be created if it doesn't exist. is_main_process (bool, optional, defaults to True) — Whether the process calling this is the main process or not. Useful during distributed training (e.g., on TPUs), when this function needs to be called on all processes. In this case, set is_main_process=True only on the main process to avoid race conditions. state_dict (nested dictionary of torch.Tensor) — The state dictionary of the model to save. Will default to self.state_dict(), but can be used to only save parts of the model or if special precautions need to be taken when recovering the state dictionary of a model (like when using model parallelism). save_function (Callable) — The function to use to save the state dictionary. Useful on distributed training like TPUs when one needs to replace torch.save by another method. push_to_hub (bool, optional, defaults to False) — Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the repository you want to push to with repo_id (will default to the name of save_directory in your namespace). max_shard_size (int or str, optional, defaults to "10GB") — The maximum size for a checkpoint before being sharded. Checkpoints shard will then be each of size lower than this size. If expressed as a string, needs to be digits followed by a unit (like "5MB"). If a single weight of the model is bigger than max_shard_size, it will be in its own checkpoint shard which will be bigger than max_shard_size. safe_serialization (bool, optional, defaults to False) — Whether to save the model using safetensors or the traditional PyTorch way (that uses pickle). variant (str, optional) — If specified, weights are saved in the format pytorch_model.<variant>.bin. token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, or not specified, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). save_peft_format (bool, optional, defaults to True) — For backward compatibility with the PEFT library, in case adapter weights are attached to the model, all keys of the adapter state dict need to be prepended with base_model.model. Advanced users can disable this behaviour by setting save_peft_format to False. kwargs (Dict[str, Any], optional) — Additional keyword arguments passed along to the push_to_hub() method. Save a model and its configuration file to a directory, so that it can be re-loaded using the from_pretrained() class method. set_input_embeddings < source > ( value: Module ) Parameters value (nn.Module) — A module mapping vocabulary to hidden states. Set model's input embeddings. Tie the weights between the input embeddings and the output embeddings. If the torchscript flag is set in the configuration, can't handle parameter sharing so we are cloning the weights instead. warn_if_padding_and_no_attention_mask < source > ( input_ids attention_mask ) Shows a one-time warning if the input_ids appear to contain padding and no attention mask was given.
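To make the save/reload cycle concrete, here is a short sketch of save_pretrained() followed by from_pretrained(); the local ./my_model directory and the 200MB shard size are arbitrary choices for illustration:

from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-cased")

# Writes config.json plus the (possibly sharded) weight files to disk.
model.save_pretrained("./my_model", max_shard_size="200MB", safe_serialization=True)

# Reload the same architecture and weights from the directory just written.
reloaded = AutoModel.from_pretrained("./my_model")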
Large model loading In Transformers 4.20.0, the from_pretrained() method has been reworked to accommodate large models using Accelerate. This requires Accelerate >= 0.9.0 and PyTorch >= 1.9.0. Instead of creating the full model, then loading the pretrained weights inside it (which takes twice the size of the model in RAM, one for the randomly initialized model, one for the weights), there is an option to create the model as an empty shell, then only materialize its parameters when the pretrained weights are loaded. This option can be activated with low_cpu_mem_usage=True. The model is first created on the Meta device (with empty weights) and the state dict is then loaded inside it (shard by shard in the case of a sharded checkpoint). This way the maximum RAM used is the full size of the model only. from transformers import AutoModelForSeq2SeqLM t0pp = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", low_cpu_mem_usage=True) Moreover, you can directly place the model on different devices if it doesn’t fully fit in RAM (only works for inference for now). With device_map="auto", Accelerate will determine where to put each layer to maximize the use of your fastest devices (GPUs) and offload the rest on the CPU, or even the hard drive if you don’t have enough GPU RAM (or CPU RAM). Even if the model is split across several devices, it will run as you would normally expect. When passing a device_map, low_cpu_mem_usage is automatically set to True, so you don’t need to specify it: from transformers import AutoModelForSeq2SeqLM t0pp = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", device_map="auto") You can inspect how the model was split across devices by looking at its hf_device_map attribute: {'shared': 0, 'decoder.embed_tokens': 0, 'encoder': 0, 'decoder.block.0': 0, 'decoder.block.1': 1, 'decoder.block.2': 1, 'decoder.block.3': 1, 'decoder.block.4': 1, 'decoder.block.5': 1, 'decoder.block.6': 1, 'decoder.block.7': 1, 'decoder.block.8': 1, 'decoder.block.9': 1, 'decoder.block.10': 1, 'decoder.block.11': 1, 'decoder.block.12': 1, 'decoder.block.13': 1, 'decoder.block.14': 1, 'decoder.block.15': 1, 'decoder.block.16': 1, 'decoder.block.17': 1, 'decoder.block.18': 1, 'decoder.block.19': 1, 'decoder.block.20': 1, 'decoder.block.21': 1, 'decoder.block.22': 'cpu', 'decoder.block.23': 'cpu', 'decoder.final_layer_norm': 'cpu', 'decoder.dropout': 'cpu', 'lm_head': 'cpu'} You can also write your own device map following the same format (a dictionary layer name to device). It should map all parameters of the model to a given device, but you don’t have to detail where all the submodules of one layer go if that layer is entirely on the same device. For instance, the following device map would work properly for T0pp (as long as you have the GPU memory): device_map = {"shared": 0, "encoder": 0, "decoder": 1, "lm_head": 1} Another way to minimize the memory impact of your model is to instantiate it at a lower precision dtype (like torch.float16) or use direct quantization techniques as described below. Model Instantiation dtype Under Pytorch a model normally gets instantiated with torch.float32 format. This can be an issue if one tries to load a model whose weights are in fp16, since it’d require twice as much memory. 
To overcome this limitation, you can either explicitly pass the desired dtype using torch_dtype argument: model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype=torch.float16) or, if you want the model to always load in the most optimal memory pattern, you can use the special value "auto", and then dtype will be automatically derived from the model’s weights: model = T5ForConditionalGeneration.from_pretrained("t5", torch_dtype="auto") Models instantiated from scratch can also be told which dtype to use with: config = T5Config.from_pretrained("t5") model = AutoModel.from_config(config) Due to Pytorch design, this functionality is only available for floating dtypes. ModuleUtilsMixin class transformers.modeling_utils.ModuleUtilsMixin < source > ( ) A few utilities for torch.nn.Modules, to be used as a mixin. Add a memory hook before and after each sub-module forward pass to record increase in memory consumption. Increase in memory consumption is stored in a mem_rss_diff attribute for each module and can be reset to zero with model.reset_memory_hooks_state(). estimate_tokens < source > ( input_dict: typing.Dict[str, typing.Union[torch.Tensor, typing.Any]] ) → int Parameters inputs (dict) — The model inputs. The total number of tokens. Helper function to estimate the total number of tokens from the model inputs. floating_point_ops < source > ( input_dict: typing.Dict[str, typing.Union[torch.Tensor, typing.Any]] exclude_embeddings: bool = True ) → int Parameters batch_size (int) — The batch size for the forward pass. sequence_length (int) — The number of tokens in each line of the batch. exclude_embeddings (bool, optional, defaults to True) — Whether or not to count embedding and softmax operations. The number of floating-point operations. Get number of (optionally, non-embeddings) floating-point operations for the forward and backward passes of a batch with this transformer model. Default approximation neglects the quadratic dependency on the number of tokens (valid if 12 * d_model << sequence_length) as laid out in this paper section 2.1. Should be overridden for transformers with parameter re-use e.g. Albert or Universal Transformers, or if doing long-range modeling with very high sequence lengths. get_extended_attention_mask < source > ( attention_mask: Tensor input_shape: typing.Tuple[int] device: device = None dtype: torch.float32 = None ) Parameters attention_mask (torch.Tensor) — Mask with ones indicating tokens to attend to, zeros for tokens to ignore. input_shape (Tuple[int]) — The shape of the input to the model. Makes broadcastable attention and causal masks so that future and masked tokens are ignored. get_head_mask < source > ( head_mask: typing.Optional[torch.Tensor] num_hidden_layers: int is_attention_chunked: bool = False ) Parameters head_mask (torch.Tensor with shape [num_heads] or [num_hidden_layers x num_heads], optional) — The mask indicating if we should keep the heads or not (1.0 for keep, 0.0 for discard). num_hidden_layers (int) — The number of hidden layers in the model. is_attention_chunked (bool, optional, defaults to False) — Whether or not the attentions scores are computed by chunks or not. Prepare the head mask if needed. invert_attention_mask < source > ( encoder_attention_mask: Tensor ) → torch.Tensor Parameters encoder_attention_mask (torch.Tensor) — An attention mask. The inverted attention mask. Invert an attention mask (e.g., switches 0. and 1.). 
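As a small illustration of the ModuleUtilsMixin helpers documented above, here is a sketch assuming a BERT encoder; the toy attention mask and input are made up:

import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")

# 1 = attend, 0 = ignore; one sequence of length 5 with two padded positions.
attention_mask = torch.tensor([[1, 1, 1, 0, 0]])

# Broadcastable (batch, 1, 1, seq_len) mask; ignored positions hold a large negative value.
extended = model.get_extended_attention_mask(attention_mask, attention_mask.shape)
print(extended.shape)

# Rough forward+backward FLOPs estimate for a batch with these inputs.
input_ids = torch.ones((1, 5), dtype=torch.long)
print(model.floating_point_ops({"input_ids": input_ids}))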
num_parameters < source > ( only_trainable: bool = False exclude_embeddings: bool = False ) → int Parameters only_trainable (bool, optional, defaults to False) — Whether or not to return only the number of trainable parameters exclude_embeddings (bool, optional, defaults to False) — Whether or not to return only the number of non-embeddings parameters The number of parameters. Get number of (optionally, trainable or non-embeddings) parameters in the module. TFPreTrainedModel class transformers.TFPreTrainedModel < source > ( *args **kwargs ) Base class for all TF models. TFPreTrainedModel takes care of storing the configuration of the models and handles methods for loading, downloading and saving models as well as a few methods common to all models to: resize the input embeddings, prune heads in the self-attention heads. Class attributes (overridden by derived classes): config_class (PretrainedConfig) — A subclass of PretrainedConfig to use as configuration class for this model architecture. base_model_prefix (str) — A string indicating the attribute associated to the base model in derived classes of the same architecture adding modules on top of the base model. main_input_name (str) — The name of the principal input to the model (often input_ids for NLP models, pixel_values for vision models and input_values for speech models). push_to_hub < source > ( repo_id: str use_temp_dir: Optional[bool] = None commit_message: Optional[str] = None private: Optional[bool] = None max_shard_size: Optional[Union[int, str]] = '10GB' token: Optional[Union[bool, str]] = None use_auth_token: Optional[Union[bool, str]] = None create_pr: bool = False **base_model_card_args ) Parameters repo_id (str) — The name of the repository you want to push your model to. It should contain your organization name when pushing to a given organization. use_temp_dir (bool, optional) — Whether or not to use a temporary directory to store the files saved before they are pushed to the Hub. Will default to True if there is no directory named like repo_id, False otherwise. commit_message (str, optional) — Message to commit while pushing. Will default to "Upload model". private (bool, optional) — Whether or not the repository created should be private. token (bool or str, optional) — The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). Will default to True if repo_url is not specified. max_shard_size (int or str, optional, defaults to "10GB") — Only applicable for models. The maximum size for a checkpoint before being sharded. Checkpoints shard will then be each of size lower than this size. If expressed as a string, needs to be digits followed by a unit (like "5MB"). create_pr (bool, optional, defaults to False) — Whether or not to create a PR with the uploaded files or directly commit. Upload the model files to the 🤗 Model Hub while synchronizing a local clone of the repo in repo_path_or_name. Examples: from transformers import TFAutoModel model = TFAutoModel.from_pretrained("bert-base-cased") model.push_to_hub("my-finetuned-bert") model.push_to_hub("huggingface/my-finetuned-bert") can_generate < source > ( ) → bool Whether this model can generate sequences with .generate(). Returns whether this model can generate sequences with .generate(). 
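Rounding off the PyTorch-side utilities, a quick sketch of the num_parameters and get_memory_footprint helpers documented earlier, assuming a BERT checkpoint:

from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")
total = model.num_parameters()
trainable = model.num_parameters(only_trainable=True)
without_embeddings = model.num_parameters(exclude_embeddings=True)
print(total, trainable, without_embeddings)

# Size of the parameters (and, by default, buffers) in bytes.
print(model.get_memory_footprint())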
compile < source > ( optimizer = 'rmsprop' loss = 'auto_with_warning' metrics = None loss_weights = None weighted_metrics = None run_eagerly = None steps_per_execution = None **kwargs ) This is a thin wrapper that sets the model’s loss output head as the loss if the user does not specify a loss function themselves. create_model_card < source > ( output_dir model_name: str language: Optional[str] = None license: Optional[str] = None tags: Optional[str] = None finetuned_from: Optional[str] = None tasks: Optional[str] = None dataset_tags: Optional[Union[str, List[str]]] = None dataset: Optional[Union[str, List[str]]] = None dataset_args: Optional[Union[str, List[str]]] = None ) Parameters output_dir (str or os.PathLike) — The folder in which to create the model card. model_name (str, optional) — The name of the model. language (str, optional) — The language of the model (if applicable) license (str, optional) — The license of the model. Will default to the license of the pretrained model used, if the original model given to the Trainer comes from a repo on the Hub. tags (str or List[str], optional) — Some tags to be included in the metadata of the model card. finetuned_from (str, optional) — The name of the model used to fine-tune this one (if applicable). Will default to the name of the repo of the original model given to the Trainer (if it comes from the Hub). tasks (str or List[str], optional) — One or several task identifiers, to be included in the metadata of the model card. dataset_tags (str or List[str], optional) — One or several dataset tags, to be included in the metadata of the model card. dataset (str or List[str], optional) — One or several dataset identifiers, to be included in the metadata of the model card. dataset_args (str or List[str], optional) — One or several dataset arguments, to be included in the metadata of the model card. Creates a draft of a model card using the information available to the Trainer. eager_serving < source > ( inputs ) Parameters inputs (Dict[str, tf.Tensor]) — The input of the saved model as a dictionary of tensors. Method used for serving the model. This method is deprecated, and will be removed. from_pretrained < source > ( pretrained_model_name_or_path: Optional[Union[str, os.PathLike]] *model_args config: Optional[Union[PretrainedConfig, str, os.PathLike]] = None cache_dir: Optional[Union[str, os.PathLike]] = None ignore_mismatched_sizes: bool = False force_download: bool = False local_files_only: bool = False token: Optional[Union[str, bool]] = None revision: str = 'main' **kwargs ) Parameters pretrained_model_name_or_path (str, optional) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. None if you are both providing the configuration and state dictionary (resp. with keyword arguments config and state_dict). 
model_args (sequence of positional arguments, optional) — All remaining positional arguments will be passed to the underlying model's __init__ method. config (Union[PretrainedConfig, str], optional) — Can be either: an instance of a class derived from PretrainedConfig, a string valid as input to from_pretrained(). Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch state_dict save file (see docstring of pretrained_model_name_or_path argument). ignore_mismatched_sizes (bool, optional, defaults to False) — Whether or not to raise an error if some of the weights from the checkpoint do not have the same size as the weights of the model (if for instance, you are instantiating a model with 10 labels from a checkpoint with 3 labels). cache_dir (str, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, or not specified, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. Instantiate a pretrained TF 2.0 model from a pre-trained model configuration. The warning Weights from XXX not initialized from pretrained model means that the weights of XXX do not come pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning task. The warning Weights from XXX not used in YYY means that the layer XXX is not used by YYY, therefore those weights are discarded.
Examples: >>> from transformers import BertConfig, TFBertModel >>> >>> model = TFBertModel.from_pretrained("bert-base-uncased") >>> >>> model = TFBertModel.from_pretrained("./test/saved_model/") >>> >>> model = TFBertModel.from_pretrained("bert-base-uncased", output_attentions=True) >>> assert model.config.output_attentions == True >>> >>> config = BertConfig.from_json_file("./pt_model/my_pt_model_config.json") >>> model = TFBertModel.from_pretrained("./pt_model/my_pytorch_model.bin", from_pt=True, config=config) get_bias < source > ( ) → tf.Variable The weights representing the bias, None if not an LM model. Dict of bias attached to an LM head. The key represents the name of the bias attribute. get_head_mask < source > ( head_mask: tf.Tensor | None num_hidden_layers: int ) Parameters head_mask (tf.Tensor with shape [num_heads] or [num_hidden_layers x num_heads], optional) — The mask indicating if we should keep the heads or not (1.0 for keep, 0.0 for discard). num_hidden_layers (int) — The number of hidden layers in the model. Prepare the head mask if needed. get_input_embeddings < source > ( ) → tf.Variable The embeddings layer mapping vocabulary to hidden states. Returns the model’s input embeddings layer. get_lm_head < source > ( ) → tf.keras.layers.Layer Returns tf.keras.layers.Layer The LM head layer if the model has one, None if not. The LM Head layer. This method must be overwritten by all the models that have a lm head. get_output_embeddings < source > ( ) → tf.Variable The new weights mapping vocabulary to hidden states. Returns the model’s output embeddings get_output_layer_with_bias < source > ( ) → tf.keras.layers.Layer Returns tf.keras.layers.Layer The layer that handles the bias, None if not an LM model. Get the layer that handles a bias attribute in case the model has an LM head with weights tied to the embeddings get_prefix_bias_name < source > ( ) → str The _prefix name of the bias. Get the concatenated _prefix name of the bias from the model name to the parent layer load_repo_checkpoint < source > ( repo_path_or_name ) → dict Parameters repo_path_or_name (str) — Can either be a repository name for your {object} in the Hub or a path to a local folder (in which case the repository will have the name of that local folder). A dictionary of extra metadata from the checkpoint, most commonly an “epoch” count. Loads a saved checkpoint (model weights and optimizer state) from a repo. Returns the current epoch count when the checkpoint was made. prepare_tf_dataset < source > ( dataset: 'datasets.Dataset' batch_size: int = 8 shuffle: bool = True tokenizer: Optional['PreTrainedTokenizerBase'] = None collate_fn: Optional[Callable] = None collate_fn_args: Optional[Dict[str, Any]] = None drop_remainder: Optional[bool] = None prefetch: bool = True ) → Dataset Parameters dataset (Any) — A [~datasets.Dataset] to be wrapped as a tf.data.Dataset. batch_size (int, defaults to 8) — The size of batches to return. shuffle (bool, defaults to True) — Whether to return samples from the dataset in random order. Usually True for training datasets and False for validation/test datasets. tokenizer (PreTrainedTokenizerBase, optional) — A PreTrainedTokenizer that will be used to pad samples to create batches. Has no effect if a specific collate_fn is passed instead. collate_fn (Callable, optional) — A function that collates samples from the dataset into a single batch. Defaults to DefaultDataCollator if no tokenizer is supplied or DataCollatorWithPadding if a tokenizer is passed. 
collate_fn_args (Dict[str, Any], optional) — A dict of arguments to pass to the collate_fn alongside the list of samples. drop_remainder (bool, optional) — Whether to drop the final batch, if the batch_size does not evenly divide the dataset length. Defaults to the same setting as shuffle. prefetch (bool, defaults to True) — Whether to add prefetching to the end of the tf.data pipeline. This is almost always beneficial for performance, but can be disabled in edge cases. A tf.data.Dataset which is ready to pass to the Keras API. Wraps a HuggingFace Dataset as a tf.data.Dataset with collation and batching. This method is designed to create a “ready-to-use” dataset that can be passed directly to Keras methods like fit() without further modification. The method will drop columns from the dataset if they don’t match input names for the model. If you want to specify the column names to return rather than using the names that match this model, we recommend using Dataset.to_tf_dataset() instead. prune_heads < source > ( heads_to_prune ) Parameters heads_to_prune (Dict[int, List[int]]) — Dictionary with keys being selected layer indices (int) and associated values being the list of heads to prune in said layer (list of int). For instance {1: [0, 2], 2: [2, 3]} will prune heads 0 and 2 on layer 1 and heads 2 and 3 on layer 2. Prunes heads of the base model. register_for_auto_class < source > ( auto_class = 'TFAutoModel' ) Parameters auto_class (str or type, optional, defaults to "TFAutoModel") — The auto class to register this new model with. Register this class with a given auto class. This should only be used for custom models as the ones in the library are already mapped with an auto class. This API is experimental and may have some slight breaking changes in the next releases. resize_token_embeddings < source > ( new_num_tokens: Optional[int] = None ) → tf.Variable or tf.keras.layers.Embedding Parameters new_num_tokens (int, optional) — The number of new tokens in the embedding matrix. Increasing the size will add newly initialized vectors at the end. Reducing the size will remove vectors from the end. If not provided or None, just returns a pointer to the input tokens without doing anything. Returns tf.Variable or tf.keras.layers.Embedding Pointer to the input tokens of the model. Resizes input token embeddings matrix of the model if new_num_tokens != config.vocab_size. Takes care of tying weights embeddings afterwards if the model class has a tie_weights() method. save_pretrained < source > ( save_directory saved_model = False version = 1 push_to_hub = False signatures = None max_shard_size: Union[int, str] = '10GB' create_pr: bool = False safe_serialization: bool = False token: Optional[Union[str, bool]] = None **kwargs ) Parameters save_directory (str) — Directory to which to save. Will be created if it doesn’t exist. saved_model (bool, optional, defaults to False) — If the model has to be saved in saved model format as well or not. version (int, optional, defaults to 1) — The version of the saved model. A saved model needs to be versioned in order to be properly loaded by TensorFlow Serving as detailed in the official documentation https://www.tensorflow.org/tfx/serving/serving_basic push_to_hub (bool, optional, defaults to False) — Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the repository you want to push to with repo_id (will default to the name of save_directory in your namespace). 
signatures (dict or tf.function, optional) — Model's signature used for serving. This will be passed to the signatures argument of model.save(). max_shard_size (int or str, optional, defaults to "10GB") — The maximum size for a checkpoint before being sharded. Checkpoints shard will then be each of size lower than this size. If expressed as a string, needs to be digits followed by a unit (like "5MB"). If a single weight of the model is bigger than max_shard_size, it will be in its own checkpoint shard which will be bigger than max_shard_size. create_pr (bool, optional, defaults to False) — Whether or not to create a PR with the uploaded files or directly commit. safe_serialization (bool, optional, defaults to False) — Whether to save the model using safetensors or the traditional PyTorch way (that uses pickle). token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, or not specified, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). kwargs (Dict[str, Any], optional) — Additional keyword arguments passed along to the push_to_hub() method. Save a model and its configuration file to a directory, so that it can be re-loaded using the from_pretrained() class method. serving ( inputs ) Parameters inputs (Dict[str, tf.Tensor]) — The input of the saved model as a dictionary of tensors. Method used for serving the model. Does not have a specific signature, but will be specialized as concrete functions when saving with save_pretrained(). Prepare the output of the saved model. Can be overridden if specific serving modifications are required. set_bias < source > ( value ) Parameters value (Dict[tf.Variable]) — All the new bias attached to an LM head. Set all the bias in the LM head. set_input_embeddings < source > ( value ) Parameters value (tf.Variable) — The new weights mapping vocabulary to hidden states. Set model's input embeddings. set_output_embeddings < source > ( value ) Parameters value (tf.Variable) — The new weights mapping hidden states to vocabulary. Set model's output embeddings. A modification of Keras's default train_step that correctly handles matching outputs to labels for our models and supports directly training on the loss output head. In addition, it ensures input keys are copied to the labels where appropriate. It will also copy label keys into the input dict when using the dummy loss, to ensure that they are available to the model during the forward pass. TFModelUtilsMixin class transformers.modeling_tf_utils.TFModelUtilsMixin < source > ( ) A few utilities for tf.keras.Model, to be used as a mixin. num_parameters < source > ( only_trainable: bool = False ) → int Parameters only_trainable (bool, optional, defaults to False) — Whether or not to return only the number of trainable parameters. The number of parameters. Get the number of (optionally, trainable) parameters in the model.
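To show how prepare_tf_dataset and the compile wrapper described above fit together, here is a sketch of a typical Keras fine-tuning loop; the dataset (GLUE SST-2), batch size and optimizer are illustrative assumptions, not part of the original documentation:

from datasets import load_dataset
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)

# Tokenize a text classification dataset; columns the model does not accept are dropped.
raw = load_dataset("glue", "sst2", split="train")
tokenized = raw.map(lambda batch: tokenizer(batch["sentence"], truncation=True), batched=True)

# Wrap the dataset as a shuffled, batched tf.data.Dataset padded by the tokenizer.
tf_dataset = model.prepare_tf_dataset(tokenized, batch_size=16, shuffle=True, tokenizer=tokenizer)

# No explicit loss: the compile() wrapper falls back to the model's internal loss head.
model.compile(optimizer="adam")
model.fit(tf_dataset, epochs=1)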
FlaxPreTrainedModel class transformers.FlaxPreTrainedModel < source > ( config: PretrainedConfig module: Module input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True ) Base class for all models. FlaxPreTrainedModel takes care of storing the configuration of the models and handles methods for loading, downloading and saving models. Class attributes (overridden by derived classes): config_class (PretrainedConfig) — A subclass of PretrainedConfig to use as configuration class for this model architecture. base_model_prefix (str) — A string indicating the attribute associated to the base model in derived classes of the same architecture adding modules on top of the base model. main_input_name (str) — The name of the principal input to the model (often input_ids for NLP models, pixel_values for vision models and input_values for speech models). push_to_hub < source > ( repo_id: str use_temp_dir: typing.Optional[bool] = None commit_message: typing.Optional[str] = None private: typing.Optional[bool] = None token: typing.Union[bool, str, NoneType] = None max_shard_size: typing.Union[int, str, NoneType] = '10GB' create_pr: bool = False safe_serialization: bool = False revision: str = None **deprecated_kwargs ) Parameters repo_id (str) — The name of the repository you want to push your model to. It should contain your organization name when pushing to a given organization. use_temp_dir (bool, optional) — Whether or not to use a temporary directory to store the files saved before they are pushed to the Hub. Will default to True if there is no directory named like repo_id, False otherwise. commit_message (str, optional) — Message to commit while pushing. Will default to "Upload model". private (bool, optional) — Whether or not the repository created should be private. token (bool or str, optional) — The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). Will default to True if repo_url is not specified. max_shard_size (int or str, optional, defaults to "10GB") — Only applicable for models. The maximum size for a checkpoint before being sharded. Checkpoints shard will then be each of size lower than this size. If expressed as a string, needs to be digits followed by a unit (like "5MB"). create_pr (bool, optional, defaults to False) — Whether or not to create a PR with the uploaded files or directly commit. safe_serialization (bool, optional, defaults to False) — Whether or not to convert the model weights in safetensors format for safer serialization. revision (str, optional) — Branch to push the uploaded files to. Upload the model checkpoint to the 🤗 Model Hub. Examples: from transformers import FlaxAutoModel model = FlaxAutoModel.from_pretrained("bert-base-cased") model.push_to_hub("my-finetuned-bert") model.push_to_hub("huggingface/my-finetuned-bert") Returns whether this model can generate sequences with .generate(). Returns: bool: Whether this model can generate sequences with .generate(). 
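Before the full from_pretrained() reference below, a minimal sketch of loading a Flax model with a half-precision computation dtype and optionally casting the stored parameters; both the dtype argument and to_bf16() are documented later in this section:

import jax.numpy as jnp
from transformers import FlaxBertModel

# Run the computation in bfloat16 (useful on TPU); the stored parameters remain in fp32.
model = FlaxBertModel.from_pretrained("bert-base-cased", dtype=jnp.bfloat16)

# Optionally cast the parameters themselves as well, e.g. to save memory at inference time.
model.params = model.to_bf16(model.params)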
from_pretrained < source > ( pretrained_model_name_or_path: typing.Union[str, os.PathLike] dtype: dtype = <class 'jax.numpy.float32'> *model_args config: typing.Union[transformers.configuration_utils.PretrainedConfig, str, os.PathLike, NoneType] = None cache_dir: typing.Union[str, os.PathLike, NoneType] = None ignore_mismatched_sizes: bool = False force_download: bool = False local_files_only: bool = False token: typing.Union[bool, str, NoneType] = None revision: str = 'main' **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a pt index checkpoint file (e.g, ./tf_model/model.ckpt.index). In this case, from_pt should be set to True. dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype. Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters. If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16(). model_args (sequence of positional arguments, optional) — All remaining positional arguments will be passed to the underlying model’s __init__ method. config (Union[PretrainedConfig, str, os.PathLike], optional) — Can be either: an instance of a class derived from PretrainedConfig, a string or path valid as input to from_pretrained(). Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir (Union[str, os.PathLike], optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). ignore_mismatched_sizes (bool, optional, defaults to False) — Whether or not to raise an error if some of the weights from the checkpoint do not have the same size as the weights of the model (if for instance, you are instantiating a model with 10 labels from a checkpoint with 3 labels). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. 
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (i.e., do not try to download the model). token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, or not specified, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. Instantiate a pretrained flax model from a pre-trained model configuration. The warning Weights from XXX not initialized from pretrained model means that the weights of XXX do not come pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning task. The warning Weights from XXX not used in YYY means that the layer XXX is not used by YYY, therefore those weights are discarded. Examples: >>> from transformers import BertConfig, FlaxBertModel >>> >>> model = FlaxBertModel.from_pretrained("bert-base-cased") >>> >>> model = FlaxBertModel.from_pretrained("./test/saved_model/") >>> >>> config = BertConfig.from_json_file("./pt_model/config.json") >>> model = FlaxBertModel.from_pretrained("./pt_model/pytorch_model.bin", from_pt=True, config=config) load_flax_sharded_weights < source > ( shard_files ) → Dict Parameters shard_files (List[str]) — The list of shard files to load. A nested dictionary of the model parameters, in the expected format for flax models: {'model': {'params': {'...'}}}. This is the same as flax.serialization.from_bytes (https://flax.readthedocs.io/en/latest/_modules/flax/serialization.html#from_bytes) but for a sharded checkpoint. This load is performed efficiently: each checkpoint shard is loaded one by one in RAM and deleted after being loaded in the model. register_for_auto_class < source > ( auto_class = 'FlaxAutoModel' ) Parameters auto_class (str or type, optional, defaults to "FlaxAutoModel") — The auto class to register this new model with. Register this class with a given auto class. This should only be used for custom models as the ones in the library are already mapped with an auto class. This API is experimental and may have some slight breaking changes in the next releases. save_pretrained < source > ( save_directory: typing.Union[str, os.PathLike] params = None push_to_hub = False max_shard_size = '10GB' token: typing.Union[bool, str, NoneType] = None **kwargs ) Parameters save_directory (str or os.PathLike) — Directory to which to save. Will be created if it doesn't exist. push_to_hub (bool, optional, defaults to False) — Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the repository you want to push to with repo_id (will default to the name of save_directory in your namespace). max_shard_size (int or str, optional, defaults to "10GB") — The maximum size for a checkpoint before being sharded. Checkpoints shard will then be each of size lower than this size. If expressed as a string, needs to be digits followed by a unit (like "5MB").
If a single weight of the model is bigger than max_shard_size, it will be in its own checkpoint shard which will be bigger than max_shard_size. token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, or not specified, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). kwargs (Dict[str, Any], optional) — Additional keyword arguments passed along to the push_to_hub() method. Save a model and its configuration file to a directory, so that it can be re-loaded using the from_pretrained() class method. to_bf16 < source > ( params: typing.Union[typing.Dict, flax.core.frozen_dict.FrozenDict] mask: typing.Any = None ) Parameters params (Union[Dict, FrozenDict]) — A PyTree of model parameters. mask (Union[Dict, FrozenDict]) — A PyTree with the same structure as the params tree. The leaves should be booleans: True for params you want to cast, and False for those you want to skip. Cast the floating-point params to jax.numpy.bfloat16. This returns a new params tree and does not cast the params in place. This method can be used on TPU to explicitly convert the model parameters to bfloat16 precision to do full half-precision training or to save weights in bfloat16 for inference in order to save memory and improve speed. Examples: >>> from transformers import FlaxBertModel >>> >>> model = FlaxBertModel.from_pretrained("bert-base-cased") >>> >>> model.params = model.to_bf16(model.params) >>> >>> >>> from flax import traverse_util >>> model = FlaxBertModel.from_pretrained("bert-base-cased") >>> flat_params = traverse_util.flatten_dict(model.params) >>> mask = { ... path: (path[-2:] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale")) ... for path in flat_params ... } >>> mask = traverse_util.unflatten_dict(mask) >>> model.params = model.to_bf16(model.params, mask) to_fp16 < source > ( params: typing.Union[typing.Dict, flax.core.frozen_dict.FrozenDict] mask: typing.Any = None ) Parameters params (Union[Dict, FrozenDict]) — A PyTree of model parameters. mask (Union[Dict, FrozenDict]) — A PyTree with the same structure as the params tree. The leaves should be booleans: True for params you want to cast, and False for those you want to skip. Cast the floating-point params to jax.numpy.float16. This returns a new params tree and does not cast the params in place. This method can be used on GPU to explicitly convert the model parameters to float16 precision to do full half-precision training or to save weights in float16 for inference in order to save memory and improve speed. Examples: >>> from transformers import FlaxBertModel >>> >>> model = FlaxBertModel.from_pretrained("bert-base-cased") >>> >>> model.params = model.to_fp16(model.params) >>> >>> >>> from flax import traverse_util >>> model = FlaxBertModel.from_pretrained("bert-base-cased") >>> flat_params = traverse_util.flatten_dict(model.params) >>> mask = { ... path: (path[-2:] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale")) ... for path in flat_params ... } >>> mask = traverse_util.unflatten_dict(mask) >>> model.params = model.to_fp16(model.params, mask) to_fp32 < source > ( params: typing.Union[typing.Dict, flax.core.frozen_dict.FrozenDict] mask: typing.Any = None ) Parameters params (Union[Dict, FrozenDict]) — A PyTree of model parameters. mask (Union[Dict, FrozenDict]) — A PyTree with the same structure as the params tree. 
The leaves should be booleans: True for params you want to cast, and False for those you want to skip. Cast the floating-point params to jax.numpy.float32. This method can be used to explicitly convert the model parameters to fp32 precision. This returns a new params tree and does not cast the params in place. Examples: >>> from transformers import FlaxBertModel >>> >>> model = FlaxBertModel.from_pretrained("bert-base-cased") >>> >>> >>> model.params = model.to_fp16(model.params) >>> >>> model.params = model.to_fp32(model.params) Pushing to the Hub class transformers.utils.PushToHubMixin < source > ( ) A Mixin containing the functionality to push a model or tokenizer to the hub. push_to_hub < source > ( repo_id: str use_temp_dir: typing.Optional[bool] = None commit_message: typing.Optional[str] = None private: typing.Optional[bool] = None token: typing.Union[bool, str, NoneType] = None max_shard_size: typing.Union[int, str, NoneType] = '10GB' create_pr: bool = False safe_serialization: bool = False revision: str = None **deprecated_kwargs ) Parameters repo_id (str) — The name of the repository you want to push your {object} to. It should contain your organization name when pushing to a given organization. use_temp_dir (bool, optional) — Whether or not to use a temporary directory to store the files saved before they are pushed to the Hub. Will default to True if there is no directory named like repo_id, False otherwise. commit_message (str, optional) — Message to commit while pushing. Will default to "Upload {object}". private (bool, optional) — Whether or not the repository created should be private. token (bool or str, optional) — The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). Will default to True if repo_url is not specified. max_shard_size (int or str, optional, defaults to "10GB") — Only applicable for models. The maximum size for a checkpoint before being sharded. Each checkpoint shard will then be smaller than this size. If expressed as a string, needs to be digits followed by a unit (like "5MB"). create_pr (bool, optional, defaults to False) — Whether or not to create a PR with the uploaded files or directly commit. safe_serialization (bool, optional, defaults to False) — Whether or not to convert the model weights to the safetensors format for safer serialization. revision (str, optional) — Branch to push the uploaded files to. Upload the {object_files} to the 🤗 Model Hub. Examples: from transformers import {object_class} {object} = {object_class}.from_pretrained("bert-base-cased") {object}.push_to_hub("my-finetuned-bert") {object}.push_to_hub("huggingface/my-finetuned-bert") Sharded checkpoints transformers.modeling_utils.load_sharded_checkpoint < source > ( model folder strict = True prefer_safe = True ) → NamedTuple Parameters model (torch.nn.Module) — The model in which to load the checkpoint. folder (str or os.PathLike) — A path to a folder containing the sharded checkpoint. strict (bool, optional, defaults to True) — Whether to strictly enforce that the keys in the model state dict match the keys in the sharded checkpoint. prefer_safe (bool, optional, defaults to True) — If both safetensors and PyTorch save files are present in the checkpoint and prefer_safe is True, the safetensors files will be loaded. Otherwise, PyTorch files are always loaded when possible. 
Returns a named tuple with missing_keys and unexpected_keys fields: missing_keys is a list of str containing the missing keys; unexpected_keys is a list of str containing the unexpected keys. This is the same as torch.nn.Module.load_state_dict but for a sharded checkpoint. This load is performed efficiently: each checkpoint shard is loaded one by one in RAM and deleted after being loaded in the model.
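A minimal sketch of how this helper might be used, assuming a sharded PyTorch checkpoint was previously written with save_pretrained to a local folder; the folder name and the deliberately small max_shard_size below are only illustrative, not prescribed by the API:

>>> from transformers import BertModel
>>> from transformers.modeling_utils import load_sharded_checkpoint

>>> # Save a model as a sharded checkpoint (a small max_shard_size forces several shards plus an index file)
>>> model = BertModel.from_pretrained("bert-base-cased")
>>> model.save_pretrained("./sharded_bert", max_shard_size="200MB")

>>> # Load the shards back into a freshly initialized model, one shard at a time to limit peak RAM usage
>>> new_model = BertModel.from_pretrained("bert-base-cased")
>>> result = load_sharded_checkpoint(new_model, "./sharded_bert")
>>> # result.missing_keys and result.unexpected_keys report any mismatches, as described above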
https://huggingface.co/docs/transformers/main_classes/data_collator
Data Collator Data collators are objects that will form a batch by using a list of dataset elements as input. These elements are of the same type as the elements of train_dataset or eval_dataset. To be able to build batches, data collators may apply some processing (like padding). Some of them (like DataCollatorForLanguageModeling) also apply some random data augmentation (like random masking) on the formed batch. Examples of use can be found in the example scripts or example notebooks. Default data collator transformers.default_data_collator < source > ( features: typing.List[InputDataClass] return_tensors = 'pt' ) Very simple data collator that simply collates batches of dict-like objects and performs special handling for potential keys named: label: handles a single value (int or float) per object label_ids: handles a list of values per object Does not do any additional preprocessing: property names of the input object will be used as corresponding inputs to the model. See glue and ner for example of how it’s useful. DefaultDataCollator class transformers.DefaultDataCollator < source > ( return_tensors: str = 'pt' ) Parameters return_tensors (str) — The type of Tensor to return. Allowable values are “np”, “pt” and “tf”. Very simple data collator that simply collates batches of dict-like objects and performs special handling for potential keys named: label: handles a single value (int or float) per object label_ids: handles a list of values per object Does not do any additional preprocessing: property names of the input object will be used as corresponding inputs to the model. See glue and ner for example of how it’s useful. This is an object (like other data collators) rather than a pure function like default_data_collator. This can be helpful if you need to set a return_tensors value at initialization. DataCollatorWithPadding class transformers.DataCollatorWithPadding < source > ( tokenizer: PreTrainedTokenizerBase padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = True max_length: typing.Optional[int] = None pad_to_multiple_of: typing.Optional[int] = None return_tensors: str = 'pt' ) Parameters tokenizer (PreTrainedTokenizer or PreTrainedTokenizerFast) — The tokenizer used for encoding the data. padding (bool, str or PaddingStrategy, optional, defaults to True) — Select a strategy to pad the returned sequences (according to the model’s padding side and padding index) among: True or 'longest' (default): Pad to the longest sequence in the batch (or no padding if only a single sequence is provided). 'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. False or 'do_not_pad': No padding (i.e., can output a batch with sequences of different lengths). max_length (int, optional) — Maximum length of the returned list and optionally padding length (see above). pad_to_multiple_of (int, optional) — If set will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta). return_tensors (str) — The type of Tensor to return. Allowable values are “np”, “pt” and “tf”. Data collator that will dynamically pad the inputs received. 
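As a brief illustration of the dynamic padding described above (the checkpoint name and sentences are only examples; the exact padded length depends on the tokenizer):

>>> from transformers import AutoTokenizer, DataCollatorWithPadding

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> data_collator = DataCollatorWithPadding(tokenizer=tokenizer, pad_to_multiple_of=8)

>>> # Two features of different lengths; the collator pads them to a common length (here, a multiple of 8)
>>> features = [
...     tokenizer("Hello world"),
...     tokenizer("A much longer sentence that needs more tokens"),
... ]
>>> batch = data_collator(features)
>>> # batch["input_ids"] and batch["attention_mask"] are now rectangular PyTorch tensors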
DataCollatorForTokenClassification class transformers.DataCollatorForTokenClassification < source > ( tokenizer: PreTrainedTokenizerBase padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = True max_length: typing.Optional[int] = None pad_to_multiple_of: typing.Optional[int] = None label_pad_token_id: int = -100 return_tensors: str = 'pt' ) Parameters tokenizer (PreTrainedTokenizer or PreTrainedTokenizerFast) — The tokenizer used for encoding the data. padding (bool, str or PaddingStrategy, optional, defaults to True) — Select a strategy to pad the returned sequences (according to the model’s padding side and padding index) among: True or 'longest' (default): Pad to the longest sequence in the batch (or no padding if only a single sequence is provided). 'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. False or 'do_not_pad': No padding (i.e., can output a batch with sequences of different lengths). max_length (int, optional) — Maximum length of the returned list and optionally padding length (see above). pad_to_multiple_of (int, optional) — If set will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta). label_pad_token_id (int, optional, defaults to -100) — The id to use when padding the labels (-100 will be automatically ignored by PyTorch loss functions). return_tensors (str) — The type of Tensor to return. Allowable values are “np”, “pt” and “tf”. Data collator that will dynamically pad the inputs received, as well as the labels. DataCollatorForSeq2Seq class transformers.DataCollatorForSeq2Seq < source > ( tokenizer: PreTrainedTokenizerBase model: typing.Optional[typing.Any] = None padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy] = True max_length: typing.Optional[int] = None pad_to_multiple_of: typing.Optional[int] = None label_pad_token_id: int = -100 return_tensors: str = 'pt' ) Parameters tokenizer (PreTrainedTokenizer or PreTrainedTokenizerFast) — The tokenizer used for encoding the data. model (PreTrainedModel) — The model that is being trained. If set and the model has a prepare_decoder_input_ids_from_labels method, it is used to prepare the decoder_input_ids. This is useful when using label_smoothing to avoid calculating the loss twice. padding (bool, str or PaddingStrategy, optional, defaults to True) — Select a strategy to pad the returned sequences (according to the model’s padding side and padding index) among: True or 'longest' (default): Pad to the longest sequence in the batch (or no padding if only a single sequence is provided). 'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. False or 'do_not_pad': No padding (i.e., can output a batch with sequences of different lengths). max_length (int, optional) — Maximum length of the returned list and optionally padding length (see above). pad_to_multiple_of (int, optional) — If set will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta). label_pad_token_id (int, optional, defaults to -100) — The id to use when padding the labels (-100 will be automatically ignored by PyTorch loss functions). 
return_tensors (str) — The type of Tensor to return. Allowable values are “np”, “pt” and “tf”. Data collator that will dynamically pad the inputs received, as well as the labels. DataCollatorForLanguageModeling class transformers.DataCollatorForLanguageModeling < source > ( tokenizer: PreTrainedTokenizerBase mlm: bool = True mlm_probability: float = 0.15 pad_to_multiple_of: typing.Optional[int] = None tf_experimental_compile: bool = False return_tensors: str = 'pt' ) Parameters tokenizer (PreTrainedTokenizer or PreTrainedTokenizerFast) — The tokenizer used for encoding the data. mlm (bool, optional, defaults to True) — Whether or not to use masked language modeling. If set to False, the labels are the same as the inputs with the padding tokens ignored (by setting them to -100). Otherwise, the labels are -100 for non-masked tokens and the value to predict for the masked token. mlm_probability (float, optional, defaults to 0.15) — The probability with which to (randomly) mask tokens in the input, when mlm is set to True. pad_to_multiple_of (int, optional) — If set will pad the sequence to a multiple of the provided value. return_tensors (str) — The type of Tensor to return. Allowable values are “np”, “pt” and “tf”. Data collator used for language modeling. Inputs are dynamically padded to the maximum length of a batch if they are not all of the same length. For best performance, this data collator should be used with a dataset having items that are dictionaries or BatchEncoding, with the "special_tokens_mask" key, as returned by a PreTrainedTokenizer or a PreTrainedTokenizerFast with the argument return_special_tokens_mask=True. A short usage sketch is given at the end of this page. numpy_mask_tokens < source > ( inputs: typing.Any special_tokens_mask: typing.Optional[typing.Any] = None ) Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original. tf_mask_tokens < source > ( inputs: typing.Any vocab_size mask_token_id special_tokens_mask: typing.Optional[typing.Any] = None ) Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original. torch_mask_tokens < source > ( inputs: typing.Any special_tokens_mask: typing.Optional[typing.Any] = None ) Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original. DataCollatorForWholeWordMask class transformers.DataCollatorForWholeWordMask < source > ( tokenizer: PreTrainedTokenizerBase mlm: bool = True mlm_probability: float = 0.15 pad_to_multiple_of: typing.Optional[int] = None tf_experimental_compile: bool = False return_tensors: str = 'pt' ) Data collator used for language modeling that masks entire words. collates batches of tensors, honoring their tokenizer’s pad_token; preprocesses batches for masked language modeling. This collator relies on details of the implementation of subword tokenization by BertTokenizer, specifically that subword tokens are prefixed with ##. For tokenizers that do not adhere to this scheme, this collator will produce an output that is roughly equivalent to DataCollatorForLanguageModeling. numpy_mask_tokens < source > ( inputs: typing.Any mask_labels: typing.Any ) Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original. Passing mask_labels means we use whole word masking (wwm); indices are masked directly according to the provided reference. tf_mask_tokens < source > ( inputs: typing.Any mask_labels: typing.Any ) Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original. 
Passing mask_labels means we use whole word masking (wwm); indices are masked directly according to the provided reference. torch_mask_tokens < source > ( inputs: typing.Any mask_labels: typing.Any ) Prepare masked tokens inputs/labels for masked language modeling: 80% MASK, 10% random, 10% original. Passing mask_labels means we use whole word masking (wwm); indices are masked directly according to the provided reference. DataCollatorForPermutationLanguageModeling class transformers.DataCollatorForPermutationLanguageModeling < source > ( tokenizer: PreTrainedTokenizerBase plm_probability: float = 0.16666666666666666 max_span_length: int = 5 return_tensors: str = 'pt' ) Data collator used for permutation language modeling. collates batches of tensors, honoring their tokenizer’s pad_token; preprocesses batches for permutation language modeling with procedures specific to XLNet. numpy_mask_tokens < source > ( inputs: typing.Any ) The masked tokens to be predicted for a particular sequence are determined by the following algorithm: Start from the beginning of the sequence by setting cur_len = 0 (number of tokens processed so far). Sample a span_length from the interval [1, max_span_length] (length of the span of tokens to be masked). Reserve a context of length context_length = span_length / plm_probability to surround the span to be masked. Sample a starting point start_index from the interval [cur_len, cur_len + context_length - span_length] and mask tokens start_index:start_index + span_length. Set cur_len = cur_len + context_length. If cur_len < max_len (i.e. there are tokens remaining in the sequence to be processed), repeat from Step 1. tf_mask_tokens < source > ( inputs: typing.Any ) The masked tokens to be predicted for a particular sequence are determined by the following algorithm: Start from the beginning of the sequence by setting cur_len = 0 (number of tokens processed so far). Sample a span_length from the interval [1, max_span_length] (length of the span of tokens to be masked). Reserve a context of length context_length = span_length / plm_probability to surround the span to be masked. Sample a starting point start_index from the interval [cur_len, cur_len + context_length - span_length] and mask tokens start_index:start_index + span_length. Set cur_len = cur_len + context_length. If cur_len < max_len (i.e. there are tokens remaining in the sequence to be processed), repeat from Step 1. torch_mask_tokens < source > ( inputs: typing.Any ) The masked tokens to be predicted for a particular sequence are determined by the following algorithm: Start from the beginning of the sequence by setting cur_len = 0 (number of tokens processed so far). Sample a span_length from the interval [1, max_span_length] (length of the span of tokens to be masked). Reserve a context of length context_length = span_length / plm_probability to surround the span to be masked. Sample a starting point start_index from the interval [cur_len, cur_len + context_length - span_length] and mask tokens start_index:start_index + span_length. Set cur_len = cur_len + context_length. If cur_len < max_len (i.e. there are tokens remaining in the sequence to be processed), repeat from Step 1.
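Usage sketch referenced above: a minimal, illustrative example of DataCollatorForLanguageModeling assembling a masked batch. The checkpoint name and sentences are only examples, and the masked positions are random on each call:

>>> from transformers import AutoTokenizer, DataCollatorForLanguageModeling

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

>>> # return_special_tokens_mask=True lets the collator avoid masking [CLS]/[SEP]
>>> features = [
...     tokenizer("Paris is the capital of France.", return_special_tokens_mask=True),
...     tokenizer("Data collators build batches.", return_special_tokens_mask=True),
... ]
>>> batch = collator(features)
>>> # batch["labels"] is -100 everywhere except at the randomly masked positions,
>>> # where it holds the original token ids to predict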
https://huggingface.co/docs/transformers/model_doc/encodec
EnCodec Overview The EnCodec neural codec model was proposed in High Fidelity Neural Audio Compression by Alexandre Défossez, Jade Copet, Gabriel Synnaeve, Yossi Adi. The abstract from the paper is the following: We introduce a state-of-the-art real-time, high-fidelity, audio codec leveraging neural networks. It consists in a streaming encoder-decoder architecture with quantized latent space trained in an end-to-end fashion. We simplify and speed-up the training by using a single multiscale spectrogram adversary that efficiently reduces artifacts and produce high-quality samples. We introduce a novel loss balancer mechanism to stabilize training: the weight of a loss now defines the fraction of the overall gradient it should represent, thus decoupling the choice of this hyper-parameter from the typical scale of the loss. Finally, we study how lightweight Transformer models can be used to further compress the obtained representation by up to 40%, while staying faster than real time. We provide a detailed description of the key design choices of the proposed model including: training objective, architectural changes and a study of various perceptual loss functions. We present an extensive subjective evaluation (MUSHRA tests) together with an ablation study for a range of bandwidths and audio domains, including speech, noisy-reverberant speech, and music. Our approach is superior to the baselines methods across all evaluated settings, considering both 24 kHz monophonic and 48 kHz stereophonic audio. This model was contributed by Matthijs, Patrick Von Platen and Arthur Zucker. The original code can be found here. Here is a quick example of how to encode and decode an audio using this model: >>> from datasets import load_dataset, Audio >>> from transformers import EncodecModel, AutoProcessor >>> librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> model = EncodecModel.from_pretrained("facebook/encodec_24khz") >>> processor = AutoProcessor.from_pretrained("facebook/encodec_24khz") >>> librispeech_dummy = librispeech_dummy.cast_column("audio", Audio(sampling_rate=processor.sampling_rate)) >>> audio_sample = librispeech_dummy[-1]["audio"]["array"] >>> inputs = processor(raw_audio=audio_sample, sampling_rate=processor.sampling_rate, return_tensors="pt") >>> encoder_outputs = model.encode(inputs["input_values"], inputs["padding_mask"]) >>> audio_values = model.decode(encoder_outputs.audio_codes, encoder_outputs.audio_scales, inputs["padding_mask"])[0] >>> >>> audio_values = model(inputs["input_values"], inputs["padding_mask"]).audio_values EncodecConfig class transformers.EncodecConfig < source > ( target_bandwidths = [1.5, 3.0, 6.0, 12.0, 24.0] sampling_rate = 24000 audio_channels = 1 normalize = False chunk_length_s = None overlap = None hidden_size = 128 num_filters = 32 num_residual_layers = 1 upsampling_ratios = [8, 5, 4, 2] norm_type = 'weight_norm' kernel_size = 7 last_kernel_size = 7 residual_kernel_size = 3 dilation_growth_rate = 2 use_causal_conv = True pad_mode = 'reflect' compress = 2 num_lstm_layers = 2 trim_right_ratio = 1.0 codebook_size = 1024 codebook_dim = None use_conv_shortcut = True **kwargs ) Parameters target_bandwidths (List[float], optional, defaults to [1.5, 3.0, 6.0, 12.0, 24.0]) — The range of diffent bandwiths the model can encode audio with. sampling_rate (int, optional, defaults to 24000) — The sampling rate at which the audio waveform should be digitalized expressed in hertz (Hz). 
audio_channels (int, optional, defaults to 1) — Number of channels in the audio data. Either 1 for mono or 2 for stereo. normalize (bool, optional, defaults to False) — Whether the audio shall be normalized when passed. chunk_length_s (float, optional) — If defined the audio is pre-processed into chunks of lengths chunk_length_s and then encoded. overlap (float, optional) — Defines the overlap between each chunk. It is used to compute the chunk_stride using the following formulae : int((1.0 - self.overlap) * self.chunk_length). hidden_size (int, optional, defaults to 128) — Intermediate representation dimension. num_filters (int, optional, defaults to 32) — Number of convolution kernels of first EncodecConv1d down sampling layer. num_residual_layers (int, optional, defaults to 1) — Number of residual layers. upsampling_ratios (Sequence[int] , optional, defaults to [8, 5, 4, 2]) — Kernel size and stride ratios. The encoder uses downsampling ratios instead of upsampling ratios, hence it will use the ratios in the reverse order to the ones specified here that must match the decoder order. norm_type (str, optional, defaults to "weight_norm") — Normalization method. Should be in ["weight_norm", "time_group_norm"] kernel_size (int, optional, defaults to 7) — Kernel size for the initial convolution. last_kernel_size (int, optional, defaults to 7) — Kernel size for the last convolution layer. residual_kernel_size (int, optional, defaults to 3) — Kernel size for the residual layers. dilation_growth_rate (int, optional, defaults to 2) — How much to increase the dilation with each layer. use_causal_conv (bool, optional, defaults to True) — Whether to use fully causal convolution. pad_mode (str, optional, defaults to "reflect") — Padding mode for the convolutions. compress (int, optional, defaults to 2) — Reduced dimensionality in residual branches (from Demucs v3). num_lstm_layers (int, optional, defaults to 2) — Number of LSTM layers at the end of the encoder. trim_right_ratio (float, optional, defaults to 1.0) — Ratio for trimming at the right of the transposed convolution under the use_causal_conv = True setup. If equal to 1.0, it means that all the trimming is done at the right. codebook_size (int, optional, defaults to 1024) — Number of discret codes that make up VQVAE. codebook_dim (int, optional) — Dimension of the codebook vectors. If not defined, uses hidden_size. use_conv_shortcut (bool, optional, defaults to True) — Whether to use a convolutional layer as the ‘skip’ connection in the EncodecResnetBlock block. If False, an identity function will be used, giving a generic residual connection. This is the configuration class to store the configuration of an EncodecModel. It is used to instantiate a Encodec model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the facebook/encodec_24khz architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. 
Example: >>> from transformers import EncodecModel, EncodecConfig >>> >>> configuration = EncodecConfig() >>> >>> model = EncodecModel(configuration) >>> >>> configuration = model.config EncodecFeatureExtractor ( feature_size: int = 1 sampling_rate: int = 24000 padding_value: float = 0.0 chunk_length_s: float = None overlap: float = None **kwargs ) Parameters feature_size (int, optional, defaults to 1) — The feature dimension of the extracted features. Use 1 for mono, 2 for stereo. sampling_rate (int, optional, defaults to 24000) — The sampling rate at which the audio waveform should be digitalized expressed in hertz (Hz). padding_value (float, optional, defaults to 0.0) — The value that is used to fill the padding values. chunk_length_s (float, optional) — If defined the audio is pre-processed into chunks of lengths chunk_length_s and then encoded. overlap (float, optional) — Defines the overlap between each chunk. It is used to compute the chunk_stride using the following formulae : int((1.0 - self.overlap) * self.chunk_length). Constructs an EnCodec feature extractor. This feature extractor inherits from SequenceFeatureExtractor which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Instantiating a feature extractor with the defaults will yield a similar configuration to that of the facebook/encodec_24khz architecture. ( raw_audio: typing.Union[numpy.ndarray, typing.List[float], typing.List[numpy.ndarray], typing.List[typing.List[float]]] padding: typing.Union[bool, str, transformers.utils.generic.PaddingStrategy, NoneType] = None truncation: typing.Optional[bool] = False max_length: typing.Optional[int] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None sampling_rate: typing.Optional[int] = None ) Parameters raw_audio (np.ndarray, List[float], List[np.ndarray], List[List[float]]) — The sequence or batch of sequences to be processed. Each sequence can be a numpy array, a list of float values, a list of numpy arrays or a list of list of float values. The numpy array must be of shape (num_samples,) for mono audio (feature_size = 1), or (2, num_samples) for stereo audio (feature_size = 2). padding (bool, str or PaddingStrategy, optional, defaults to True) — Select a strategy to pad the returned sequences (according to the model’s padding side and padding index) among: True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence if provided). 'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different lengths). truncation (bool, optional, defaults to False) — Activates truncation to cut input sequences longer than max_length to max_length. max_length (int, optional) — Maximum length of the returned list and optionally padding length (see above). return_tensors (str or TensorType, optional) — If set, will return tensors instead of list of python integers. Acceptable values are: 'tf': Return TensorFlow tf.constant objects. 'pt': Return PyTorch torch.Tensor objects. 'np': Return Numpy np.ndarray objects. sampling_rate (int, optional) — The sampling rate at which the audio input was sampled. It is strongly recommended to pass sampling_rate at the forward call to prevent silent errors. 
Main method to featurize and prepare for the model one or several sequence(s). EncodecModel class transformers.EncodecModel < source > ( config: EncodecConfig ) Parameters config (EncodecConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The EnCodec neural audio codec model. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. decode < source > ( audio_codes: Tensor audio_scales: Tensor padding_mask: typing.Optional[torch.Tensor] = None return_dict: typing.Optional[bool] = None ) Parameters audio_codes (torch.FloatTensor of shape (batch_size, nb_chunks, chunk_length), optional) — Discrete code embeddings computed using model.encode. audio_scales (torch.Tensor of shape (batch_size, nb_chunks), optional) — Scaling factor for each audio_codes input. padding_mask (torch.Tensor of shape (batch_size, channels, sequence_length)) — Padding mask used to pad the input_values. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Decodes the given frames into an output audio waveform. Note that the output might be a bit bigger than the input. In that case, any extra steps at the end can be trimmed. encode < source > ( input_values: Tensor padding_mask: Tensor = None bandwidth: typing.Optional[float] = None return_dict: typing.Optional[bool] = None ) Parameters input_values (torch.Tensor of shape (batch_size, channels, sequence_length)) — Float values of the input audio waveform. padding_mask (torch.Tensor of shape (batch_size, channels, sequence_length)) — Padding mask used to pad the input_values. bandwidth (float, optional) — The target bandwidth. Must be one of config.target_bandwidths. If None, uses the smallest possible bandwidth. The bandwidth is expressed in kbps, e.g. a 6kbps bandwidth is passed as bandwidth == 6.0. Encodes the input audio waveform into discrete codes. forward < source > ( input_values: Tensor padding_mask: typing.Optional[torch.Tensor] = None bandwidth: typing.Optional[float] = None audio_codes: typing.Optional[torch.Tensor] = None audio_scales: typing.Optional[torch.Tensor] = None return_dict: typing.Optional[bool] = None ) → transformers.models.encodec.modeling_encodec.EncodecOutput or tuple(torch.FloatTensor) Parameters input_values (torch.FloatTensor of shape (batch_size, channels, sequence_length), optional) — Raw audio input converted to float and padded to the appropriate length in order to be encoded using chunks of length self.chunk_length and a stride of config.chunk_stride. padding_mask (torch.BoolTensor of shape (batch_size, channels, sequence_length), optional) — Mask to avoid computing scaling factors on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. padding_mask should always be passed, unless the input was truncated or not padded. 
This is because in order to process tensors effectively, the input audio should be padded so that input_length % stride = step with step = chunk_length-stride. This ensures that all chunks are of the same shape. bandwidth (float, optional) — The target bandwidth. Must be one of config.target_bandwidths. If None, uses the smallest possible bandwidth. The bandwidth is expressed in kbps, e.g. a 6kbps bandwidth is passed as bandwidth == 6.0. audio_codes (torch.FloatTensor of shape (batch_size, nb_chunks, chunk_length), optional) — Discrete code embeddings computed using model.encode. audio_scales (torch.Tensor of shape (batch_size, nb_chunks), optional) — Scaling factor for each audio_codes input. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.models.encodec.modeling_encodec.EncodecOutput or tuple(torch.FloatTensor) A transformers.models.encodec.modeling_encodec.EncodecOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (EncodecConfig) and inputs. audio_codes (torch.FloatTensor of shape (batch_size, nb_chunks, chunk_length), optional) — Discrete code embeddings computed using model.encode. audio_values (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Decoded audio values, obtained using the decoder part of Encodec. The EncodecModel forward method, overrides the __call__ special method. Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from datasets import load_dataset >>> from transformers import AutoProcessor, EncodecModel >>> dataset = load_dataset("ashraq/esc50") >>> audio_sample = dataset["train"]["audio"][0]["array"] >>> model_id = "facebook/encodec_24khz" >>> model = EncodecModel.from_pretrained(model_id) >>> processor = AutoProcessor.from_pretrained(model_id) >>> inputs = processor(raw_audio=audio_sample, return_tensors="pt") >>> outputs = model(**inputs) >>> audio_codes = outputs.audio_codes >>> audio_values = outputs.audio_values
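As a complement to the example above, here is a hedged sketch showing the bandwidth argument discussed in this section being passed explicitly to encode. It reuses the dataset and checkpoint from the overview example; 6.0 stands for 6 kbps and must be one of config.target_bandwidths:

>>> from datasets import load_dataset, Audio
>>> from transformers import EncodecModel, AutoProcessor

>>> librispeech_dummy = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> model = EncodecModel.from_pretrained("facebook/encodec_24khz")
>>> processor = AutoProcessor.from_pretrained("facebook/encodec_24khz")
>>> librispeech_dummy = librispeech_dummy.cast_column("audio", Audio(sampling_rate=processor.sampling_rate))
>>> audio_sample = librispeech_dummy[-1]["audio"]["array"]

>>> inputs = processor(raw_audio=audio_sample, sampling_rate=processor.sampling_rate, return_tensors="pt")

>>> # encode at a higher target bandwidth (6 kbps) instead of the smallest available one
>>> encoder_outputs = model.encode(inputs["input_values"], inputs["padding_mask"], bandwidth=6.0)
>>> audio_values = model.decode(
...     encoder_outputs.audio_codes, encoder_outputs.audio_scales, inputs["padding_mask"]
... )[0]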
https://huggingface.co/docs/transformers/master/model_doc/dpt
DPT Overview The DPT model was proposed in Vision Transformers for Dense Prediction by René Ranftl, Alexey Bochkovskiy, Vladlen Koltun. DPT is a model that leverages the Vision Transformer (ViT) as backbone for dense prediction tasks like semantic segmentation and depth estimation. The abstract from the paper is the following: We introduce dense vision transformers, an architecture that leverages vision transformers in place of convolutional networks as a backbone for dense prediction tasks. We assemble tokens from various stages of the vision transformer into image-like representations at various resolutions and progressively combine them into full-resolution predictions using a convolutional decoder. The transformer backbone processes representations at a constant and relatively high resolution and has a global receptive field at every stage. These properties allow the dense vision transformer to provide finer-grained and more globally coherent predictions when compared to fully-convolutional networks. Our experiments show that this architecture yields substantial improvements on dense prediction tasks, especially when a large amount of training data is available. For monocular depth estimation, we observe an improvement of up to 28% in relative performance when compared to a state-of-the-art fully-convolutional network. When applied to semantic segmentation, dense vision transformers set a new state of the art on ADE20K with 49.02% mIoU. We further show that the architecture can be fine-tuned on smaller datasets such as NYUv2, KITTI, and Pascal Context where it also sets the new state of the art. DPT architecture. Taken from the original paper. This model was contributed by nielsr. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DPT. Demo notebooks for DPTForDepthEstimation can be found here. Semantic segmentation task guide Monocular depth estimation task guide If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. DPTConfig class transformers.DPTConfig < source > ( hidden_size = 768 num_hidden_layers = 12 num_attention_heads = 12 intermediate_size = 3072 hidden_act = 'gelu' hidden_dropout_prob = 0.0 attention_probs_dropout_prob = 0.0 initializer_range = 0.02 layer_norm_eps = 1e-12 image_size = 384 patch_size = 16 num_channels = 3 is_hybrid = False qkv_bias = True backbone_out_indices = [2, 5, 8, 11] readout_type = 'project' reassemble_factors = [4, 2, 1, 0.5] neck_hidden_sizes = [96, 192, 384, 768] fusion_hidden_size = 256 head_in_index = -1 use_batch_norm_in_fusion_residual = False use_auxiliary_head = True auxiliary_loss_weight = 0.4 semantic_loss_ignore_index = 255 semantic_classifier_dropout = 0.1 backbone_featmap_shape = [1, 1024, 24, 24] neck_ignore_stages = [0, 1] backbone_config = None **kwargs ) Parameters hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder. 
hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported. hidden_dropout_prob (float, optional, defaults to 0.1) — The dropout probabilitiy for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (float, optional, defaults to 0.1) — The dropout ratio for the attention probabilities. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers. image_size (int, optional, defaults to 384) — The size (resolution) of each image. patch_size (int, optional, defaults to 16) — The size (resolution) of each patch. num_channels (int, optional, defaults to 3) — The number of input channels. qkv_bias (bool, optional, defaults to True) — Whether to add a bias to the queries, keys and values. backbone_out_indices (List[int], optional, defaults to [2, 5, 8, 11]) — Indices of the intermediate hidden states to use from backbone. readout_type (str, optional, defaults to "project") — The readout type to use when processing the readout token (CLS token) of the intermediate hidden states of the ViT backbone. Can be one of ["ignore", "add", "project"]. “ignore” simply ignores the CLS token. “add” passes the information from the CLS token to all other tokens by adding the representations. “project” passes information to the other tokens by concatenating the readout to all other tokens before projecting the representation to the original feature dimension D using a linear layer followed by a GELU non-linearity. is_hybrid (bool, optional, defaults to False) — Whether to use a hybrid backbone. Useful in the context of loading DPT-Hybrid models. reassemble_factors (List[int], optional, defaults to [4, 2, 1, 0.5]) — The up/downsampling factors of the reassemble layers. neck_hidden_sizes (List[str], optional, defaults to [96, 192, 384, 768]) — The hidden sizes to project to for the feature maps of the backbone. fusion_hidden_size (int, optional, defaults to 256) — The number of channels before fusion. head_in_index (int, optional, defaults to -1) — The index of the features to use in the heads. use_batch_norm_in_fusion_residual (bool, optional, defaults to False) — Whether to use batch normalization in the pre-activate residual units of the fusion blocks. use_auxiliary_head (bool, optional, defaults to True) — Whether to use an auxiliary head during training. auxiliary_loss_weight (float, optional, defaults to 0.4) — Weight of the cross-entropy loss of the auxiliary head. semantic_loss_ignore_index (int, optional, defaults to 255) — The index that is ignored by the loss function of the semantic segmentation model. semantic_classifier_dropout (float, optional, defaults to 0.1) — The dropout ratio for the semantic classification head. backbone_featmap_shape (List[int], optional, defaults to [1, 1024, 24, 24]) — Used only for the hybrid embedding type. The shape of the feature maps of the backbone. neck_ignore_stages (List[int], optional, defaults to [0, 1]) — Used only for the hybrid embedding type. The stages of the readout layers to ignore. backbone_config (Union[Dict[str, Any], PretrainedConfig], optional) — Used only for the hybrid embedding type. The configuration of the backbone in a dictionary. 
This is the configuration class to store the configuration of a DPTModel. It is used to instantiate a DPT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the DPT Intel/dpt-large architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import DPTModel, DPTConfig >>> >>> configuration = DPTConfig() >>> >>> model = DPTModel(configuration) >>> >>> configuration = model.config to_dict < source > ( ) Serializes this instance to a Python dictionary. Overrides the default to_dict(). Returns: Dict[str, any]: Dictionary of all the attributes that make up this configuration instance. DPTFeatureExtractor Preprocess an image or a batch of images. post_process_semantic_segmentation < source > ( outputs target_sizes: typing.List[typing.Tuple] = None ) → semantic_segmentation Parameters outputs (DPTForSemanticSegmentation) — Raw outputs of the model. target_sizes (List[Tuple] of length batch_size, optional) — List of tuples corresponding to the requested final size (height, width) of each prediction. If unset, predictions will not be resized. Returns semantic_segmentation List[torch.Tensor] of length batch_size, where each item is a semantic segmentation map of shape (height, width) corresponding to the target_sizes entry (if target_sizes is specified). Each entry of each torch.Tensor corresponds to a semantic class id. Converts the output of DPTForSemanticSegmentation into semantic segmentation maps. Only supports PyTorch. DPTImageProcessor class transformers.DPTImageProcessor < source > ( do_resize: bool = True size: typing.Dict[str, int] = None resample: Resampling = <Resampling.BILINEAR: 2> keep_aspect_ratio: bool = False ensure_multiple_of: int = 1 do_rescale: bool = True rescale_factor: typing.Union[int, float] = 0.00392156862745098 do_normalize: bool = True image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None **kwargs ) Parameters do_resize (bool, optional, defaults to True) — Whether to resize the image’s (height, width) dimensions. Can be overridden by do_resize in preprocess. size (Dict[str, int], optional, defaults to {"height": 384, "width": 384}) — Size of the image after resizing. Can be overridden by size in preprocess. keep_aspect_ratio (bool, optional, defaults to False) — If True, the image is resized to the largest possible size such that the aspect ratio is preserved. Can be overridden by keep_aspect_ratio in preprocess. ensure_multiple_of (int, optional, defaults to 1) — If do_resize is True, the image is resized to a size that is a multiple of this value. Can be overridden by ensure_multiple_of in preprocess. resample (PILImageResampling, optional, defaults to PILImageResampling.BILINEAR) — Defines the resampling filter to use if resizing the image. Can be overridden by resample in preprocess. do_rescale (bool, optional, defaults to True) — Whether to rescale the image by the specified scale rescale_factor. Can be overridden by do_rescale in preprocess. rescale_factor (int or float, optional, defaults to 1/255) — Scale factor to use if rescaling the image. Can be overridden by rescale_factor in preprocess. do_normalize (bool, optional, defaults to True) — Whether to normalize the image. Can be overridden by the do_normalize parameter in the preprocess method. 
image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) — Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the image_mean parameter in the preprocess method. image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) — Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the image_std parameter in the preprocess method. Constructs a DPT image processor. preprocess < source > ( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]] do_resize: bool = None size: int = None keep_aspect_ratio: bool = None ensure_multiple_of: int = None resample: Resampling = None do_rescale: bool = None rescale_factor: float = None do_normalize: bool = None image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'> input_data_format: typing.Union[transformers.image_utils.ChannelDimension, str, NoneType] = None **kwargs ) Parameters images (ImageInput) — Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set do_rescale=False. do_resize (bool, optional, defaults to self.do_resize) — Whether to resize the image. size (Dict[str, int], optional, defaults to self.size) — Size of the image after reszing. If keep_aspect_ratio is True, the image is resized to the largest possible size such that the aspect ratio is preserved. If ensure_multiple_of is set, the image is resized to a size that is a multiple of this value. keep_aspect_ratio (bool, optional, defaults to self.keep_aspect_ratio) — Whether to keep the aspect ratio of the image. If False, the image will be resized to (size, size). If True, the image will be resized to keep the aspect ratio and the size will be the maximum possible. ensure_multiple_of (int, optional, defaults to self.ensure_multiple_of) — Ensure that the image size is a multiple of this value. resample (int, optional, defaults to self.resample) — Resampling filter to use if resizing the image. This can be one of the enum PILImageResampling, Only has an effect if do_resize is set to True. do_rescale (bool, optional, defaults to self.do_rescale) — Whether to rescale the image values between [0 - 1]. rescale_factor (float, optional, defaults to self.rescale_factor) — Rescale factor to rescale the image by if do_rescale is set to True. do_normalize (bool, optional, defaults to self.do_normalize) — Whether to normalize the image. image_mean (float or List[float], optional, defaults to self.image_mean) — Image mean. image_std (float or List[float], optional, defaults to self.image_std) — Image standard deviation. return_tensors (str or TensorType, optional) — The type of tensors to return. Can be one of: Unset: Return a list of np.ndarray. TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor. TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor. TensorType.NUMPY or 'np': Return a batch of type np.ndarray. 
TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray. data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) — The channel dimension format for the output image. Can be one of: ChannelDimension.FIRST: image in (num_channels, height, width) format. ChannelDimension.LAST: image in (height, width, num_channels) format. input_data_format (ChannelDimension or str, optional) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of: "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format. "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format. "none" or ChannelDimension.NONE: image in (height, width) format. Preprocess an image or batch of images. post_process_semantic_segmentation < source > ( outputs target_sizes: typing.List[typing.Tuple] = None ) → semantic_segmentation Parameters outputs (DPTForSemanticSegmentation) — Raw outputs of the model. target_sizes (List[Tuple] of length batch_size, optional) — List of tuples corresponding to the requested final size (height, width) of each prediction. If unset, predictions will not be resized. Returns semantic_segmentation List[torch.Tensor] of length batch_size, where each item is a semantic segmentation map of shape (height, width) corresponding to the target_sizes entry (if target_sizes is specified). Each entry of each torch.Tensor correspond to a semantic class id. Converts the output of DPTForSemanticSegmentation into semantic segmentation maps. Only supports PyTorch. DPTModel class transformers.DPTModel < source > ( config add_pooling_layer = True ) Parameters config (ViTConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare DPT Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: FloatTensor head_mask: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.dpt.modeling_dpt.BaseModelOutputWithPoolingAndIntermediateActivations or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See DPTImageProcessor.call() for details. head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. 
Returns transformers.models.dpt.modeling_dpt.BaseModelOutputWithPoolingAndIntermediateActivations or tuple(torch.FloatTensor) A transformers.models.dpt.modeling_dpt.BaseModelOutputWithPoolingAndIntermediateActivations or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DPTConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. intermediate_activations (tuple(torch.FloatTensor), optional) — Intermediate activations that can be used to compute hidden states of the model at various layers. The DPTModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoImageProcessor, DPTModel >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> image_processor = AutoImageProcessor.from_pretrained("Intel/dpt-large") >>> model = DPTModel.from_pretrained("Intel/dpt-large") >>> inputs = image_processor(image, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state >>> list(last_hidden_states.shape) [1, 577, 1024] DPTForDepthEstimation class transformers.DPTForDepthEstimation < source > ( config ) Parameters config (ViTConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DPT Model with a depth estimation head on top (consisting of 3 convolutional layers) e.g. for KITTI, NYUv2. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
forward < source > ( pixel_values: FloatTensor head_mask: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.DepthEstimatorOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See DPTImageProcessor.call() for details. head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, height, width), optional) — Ground truth depth estimation maps for computing the loss. A transformers.modeling_outputs.DepthEstimatorOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DPTConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. predicted_depth (torch.FloatTensor of shape (batch_size, height, width)) — Predicted depth for each pixel. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, num_channels, height, width). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The DPTForDepthEstimation forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import AutoImageProcessor, DPTForDepthEstimation >>> import torch >>> import numpy as np >>> from PIL import Image >>> import requests >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image_processor = AutoImageProcessor.from_pretrained("Intel/dpt-large") >>> model = DPTForDepthEstimation.from_pretrained("Intel/dpt-large") >>> >>> inputs = image_processor(images=image, return_tensors="pt") >>> with torch.no_grad(): ... 
outputs = model(**inputs) ... predicted_depth = outputs.predicted_depth >>> # interpolate to original size >>> prediction = torch.nn.functional.interpolate( ... predicted_depth.unsqueeze(1), ... size=image.size[::-1], ... mode="bicubic", ... align_corners=False, ... ) >>> # visualize the prediction >>> output = prediction.squeeze().cpu().numpy() >>> formatted = (output * 255 / np.max(output)).astype("uint8") >>> depth = Image.fromarray(formatted) DPTForSemanticSegmentation class transformers.DPTForSemanticSegmentation < source > ( config ) Parameters config (ViTConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DPT Model with a semantic segmentation head on top e.g. for ADE20k, CityScapes. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: typing.Optional[torch.FloatTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.SemanticSegmenterOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See DPTImageProcessor.call() for details. head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, height, width), optional) — Ground truth semantic segmentation maps for computing the loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels > 1, a classification loss is computed (Cross-Entropy). A transformers.modeling_outputs.SemanticSegmenterOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DPTConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels, logits_height, logits_width)) — Classification scores for each pixel. The logits returned do not necessarily have the same size as the pixel_values passed as inputs. This is to avoid doing two interpolations and losing some quality when a user needs to resize the logits to the original image size as post-processing. You should always check your logits shape and resize as needed.
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, patch_size, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The DPTForSemanticSegmentation forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import AutoImageProcessor, DPTForSemanticSegmentation >>> from PIL import Image >>> import requests >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image_processor = AutoImageProcessor.from_pretrained("Intel/dpt-large-ade") >>> model = DPTForSemanticSegmentation.from_pretrained("Intel/dpt-large-ade") >>> inputs = image_processor(images=image, return_tensors="pt") >>> outputs = model(**inputs) >>> logits = outputs.logits
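The raw logits can then be converted into per-pixel class maps with the post_process_semantic_segmentation() method documented above. A minimal sketch continuing the example (note that PIL's image.size is (width, height), so it is reversed to obtain the (height, width) tuple expected by target_sizes):

>>> # continuing the example above: resize the logits to the original image size and take the per-pixel argmax
>>> segmentation_maps = image_processor.post_process_semantic_segmentation(
...     outputs, target_sizes=[image.size[::-1]]
... )
>>> segmentation_map = segmentation_maps[0]  # torch.Tensor of shape (height, width), each entry a semantic class id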
https://huggingface.co/docs/transformers/model_doc/efficientformer
EfficientFormer Overview The EfficientFormer model was proposed in EfficientFormer: Vision Transformers at MobileNet Speed by Yanyu Li, Geng Yuan, Yang Wen, Eric Hu, Georgios Evangelidis, Sergey Tulyakov, Yanzhi Wang, Jian Ren. EfficientFormer proposes a dimension-consistent pure transformer that can be run on mobile devices for dense prediction tasks like image classification, object detection and semantic segmentation. The abstract from the paper is the following: Vision Transformers (ViT) have shown rapid progress in computer vision tasks, achieving promising results on various benchmarks. However, due to the massive number of parameters and model design, e.g., attention mechanism, ViT-based models are generally times slower than lightweight convolutional networks. Therefore, the deployment of ViT for real-time applications is particularly challenging, especially on resource-constrained hardware such as mobile devices. Recent efforts try to reduce the computation complexity of ViT through network architecture search or hybrid design with MobileNet block, yet the inference speed is still unsatisfactory. This leads to an important question: can transformers run as fast as MobileNet while obtaining high performance? To answer this, we first revisit the network architecture and operators used in ViT-based models and identify inefficient designs. Then we introduce a dimension-consistent pure transformer (without MobileNet blocks) as a design paradigm. Finally, we perform latency-driven slimming to get a series of final models dubbed EfficientFormer. Extensive experiments show the superiority of EfficientFormer in performance and speed on mobile devices. Our fastest model, EfficientFormer-L1, achieves 79.2% top-1 accuracy on ImageNet-1K with only 1.6 ms inference latency on iPhone 12 (compiled with CoreML), which runs as fast as MobileNetV2×1.4 (1.6 ms, 74.7% top-1), and our largest model, EfficientFormer-L7, obtains 83.3% accuracy with only 7.0 ms latency. Our work proves that properly designed transformers can reach extremely low latency on mobile devices while maintaining high performance. This model was contributed by novice03 and Bearnardd. The original code can be found here. The TensorFlow version of this model was added by D-Roberts. Documentation resources Image classification task guide EfficientFormerConfig class transformers.EfficientFormerConfig < source > ( depths: typing.List[int] = [3, 2, 6, 4] hidden_sizes: typing.List[int] = [48, 96, 224, 448] downsamples: typing.List[bool] = [True, True, True, True] dim: int = 448 key_dim: int = 32 attention_ratio: int = 4 resolution: int = 7 num_hidden_layers: int = 5 num_attention_heads: int = 8 mlp_expansion_ratio: int = 4 hidden_dropout_prob: float = 0.0 patch_size: int = 16 num_channels: int = 3 pool_size: int = 3 downsample_patch_size: int = 3 downsample_stride: int = 2 downsample_pad: int = 1 drop_path_rate: float = 0.0 num_meta3d_blocks: int = 1 distillation: bool = True use_layer_scale: bool = True layer_scale_init_value: float = 1e-05 hidden_act: str = 'gelu' initializer_range: float = 0.02 layer_norm_eps: float = 1e-12 image_size: int = 224 batch_norm_eps: float = 1e-05 **kwargs ) Parameters depths (List(int), optional, defaults to [3, 2, 6, 4]) — Depth of each stage. hidden_sizes (List(int), optional, defaults to [48, 96, 224, 448]) — Dimensionality of each stage. downsamples (List(bool), optional, defaults to [True, True, True, True]) — Whether or not to downsample inputs between two stages.
dim (int, optional, defaults to 448) — Number of channels in Meta3D layers. key_dim (int, optional, defaults to 32) — The size of the key in the meta3D block. attention_ratio (int, optional, defaults to 4) — Ratio of the dimension of the query and value to the dimension of the key in the MSHA block. resolution (int, optional, defaults to 7) — Size of each patch. num_hidden_layers (int, optional, defaults to 5) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 8) — Number of attention heads for each attention layer in the 3D MetaBlock. mlp_expansion_ratio (int, optional, defaults to 4) — Ratio of size of the hidden dimensionality of an MLP to the dimensionality of its input. hidden_dropout_prob (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings and encoder. patch_size (int, optional, defaults to 16) — The size (resolution) of each patch. num_channels (int, optional, defaults to 3) — The number of input channels. pool_size (int, optional, defaults to 3) — Kernel size of pooling layers. downsample_patch_size (int, optional, defaults to 3) — The size of patches in downsampling layers. downsample_stride (int, optional, defaults to 2) — The stride of convolution kernels in downsampling layers. downsample_pad (int, optional, defaults to 1) — Padding in downsampling layers. drop_path_rate (float, optional, defaults to 0.0) — Rate at which to increase dropout probability in DropPath. num_meta3d_blocks (int, optional, defaults to 1) — The number of 3D MetaBlocks in the last stage. distillation (bool, optional, defaults to True) — Whether to add a distillation head. use_layer_scale (bool, optional, defaults to True) — Whether to scale outputs from token mixers. layer_scale_init_value (float, optional, defaults to 1e-5) — Factor by which outputs from token mixers are scaled. hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers. image_size (int, optional, defaults to 224) — The size (resolution) of each image. batch_norm_eps (float, optional, defaults to 1e-05) — The epsilon used by the batch normalization layers. This is the configuration class to store the configuration of an EfficientFormerModel. It is used to instantiate an EfficientFormer model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the EfficientFormer snap-research/efficientformer-l1 architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example: >>> from transformers import EfficientFormerConfig, EfficientFormerModel >>> # Initializing an EfficientFormer configuration >>> configuration = EfficientFormerConfig() >>> # Initializing a model (with random weights) from the configuration >>> model = EfficientFormerModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config EfficientFormerImageProcessor class transformers.EfficientFormerImageProcessor < source > ( do_resize: bool = True size: typing.Union[typing.Dict[str, int], NoneType] = None resample: Resampling = <Resampling.BICUBIC: 3> do_center_crop: bool = True do_rescale: bool = True rescale_factor: typing.Union[int, float] = 0.00392156862745098 crop_size: typing.Dict[str, int] = None do_normalize: bool = True image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None **kwargs ) Parameters do_resize (bool, optional, defaults to True) — Whether to resize the image’s (height, width) dimensions to the specified (size["height"], size["width"]). Can be overridden by the do_resize parameter in the preprocess method. size (dict, optional, defaults to {"height": 224, "width": 224}) — Size of the output image after resizing. Can be overridden by the size parameter in the preprocess method. resample (PILImageResampling, optional, defaults to PILImageResampling.BICUBIC) — Resampling filter to use if resizing the image. Can be overridden by the resample parameter in the preprocess method. do_center_crop (bool, optional, defaults to True) — Whether to center crop the image to the specified crop_size. Can be overridden by do_center_crop in the preprocess method. crop_size (Dict[str, int], optional, defaults to 224) — Size of the output image after applying center_crop. Can be overridden by crop_size in the preprocess method. do_rescale (bool, optional, defaults to True) — Whether to rescale the image by the specified scale rescale_factor. Can be overridden by the do_rescale parameter in the preprocess method. rescale_factor (int or float, optional, defaults to 1/255) — Scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter in the preprocess method. do_normalize (bool, optional, defaults to True) — Whether to normalize the image. Can be overridden by the do_normalize parameter in the preprocess method. image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) — Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the image_mean parameter in the preprocess method. image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) — Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the image_std parameter in the preprocess method. Constructs an EfficientFormer image processor.
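As a brief illustration of how this image processor can be used, here is a minimal sketch that relies only on the default values documented above; for a pretrained checkpoint one would normally load the stored preprocessing configuration via from_pretrained() or AutoImageProcessor, as in the model examples below:

>>> from transformers import EfficientFormerImageProcessor
>>> from PIL import Image
>>> import requests

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> # instantiate with the documented defaults: resize to 224x224, center crop, rescale to [0, 1], normalize
>>> image_processor = EfficientFormerImageProcessor()
>>> inputs = image_processor(images=image, return_tensors="pt")
>>> inputs["pixel_values"].shape  # with the defaults above this is (1, 3, 224, 224)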
preprocess < source > ( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]] do_resize: typing.Optional[bool] = None size: typing.Dict[str, int] = None resample: Resampling = None do_center_crop: bool = None crop_size: int = None do_rescale: typing.Optional[bool] = None rescale_factor: typing.Optional[float] = None do_normalize: typing.Optional[bool] = None image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None data_format: typing.Union[str, transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'> input_data_format: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None **kwargs ) Parameters images (ImageInput) — Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set do_rescale=False. do_resize (bool, optional, defaults to self.do_resize) — Whether to resize the image. size (Dict[str, int], optional, defaults to self.size) — Dictionary in the format {"height": h, "width": w} specifying the size of the output image after resizing. resample (PILImageResampling filter, optional, defaults to self.resample) — PILImageResampling filter to use if resizing the image e.g. PILImageResampling.BILINEAR. Only has an effect if do_resize is set to True. do_center_crop (bool, optional, defaults to self.do_center_crop) — Whether to center crop the image. do_rescale (bool, optional, defaults to self.do_rescale) — Whether to rescale the image values between [0 - 1]. rescale_factor (float, optional, defaults to self.rescale_factor) — Rescale factor to rescale the image by if do_rescale is set to True. crop_size (Dict[str, int], optional, defaults to self.crop_size) — Size of the center crop. Only has an effect if do_center_crop is set to True. do_normalize (bool, optional, defaults to self.do_normalize) — Whether to normalize the image. image_mean (float or List[float], optional, defaults to self.image_mean) — Image mean to use if do_normalize is set to True. image_std (float or List[float], optional, defaults to self.image_std) — Image standard deviation to use if do_normalize is set to True. return_tensors (str or TensorType, optional) — The type of tensors to return. Can be one of: Unset: Return a list of np.ndarray. TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor. TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor. TensorType.NUMPY or 'np': Return a batch of type np.ndarray. TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray. data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) — The channel dimension format for the output image. Can be one of: "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format. "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format. Unset: Use the channel dimension format of the input image. input_data_format (ChannelDimension or str, optional) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. 
Can be one of: "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format. "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format. "none" or ChannelDimension.NONE: image in (height, width) format. Preprocess an image or batch of images. EfficientFormerModel class transformers.EfficientFormerModel < source > ( config: EfficientFormerConfig ) Parameters config (EfficientFormerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare EfficientFormer Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using ViTImageProcessor. See ViTImageProcessor.preprocess() for details. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (EfficientFormerConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). 
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The EfficientFormerModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoImageProcessor, EfficientFormerModel >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> image_processor = AutoImageProcessor.from_pretrained("snap-research/efficientformer-l1-300") >>> model = EfficientFormerModel.from_pretrained("snap-research/efficientformer-l1-300") >>> inputs = image_processor(image, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state >>> list(last_hidden_states.shape) [1, 49, 448] EfficientFormerForImageClassification class transformers.EfficientFormerForImageClassification < source > ( config: EfficientFormerConfig ) Parameters config (EfficientFormerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. EfficientFormer Model transformer with an image classification head on top (a linear layer on top of the final hidden state of the [CLS] token) e.g. for ImageNet. This model is a PyTorch nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.ImageClassifierOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using ViTImageProcessor. See ViTImageProcessor.preprocess() for details. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_outputs.ImageClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (EfficientFormerConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. 
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states (also called feature maps) of the model at the output of each stage. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The EfficientFormerForImageClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoImageProcessor, EfficientFormerForImageClassification >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> image_processor = AutoImageProcessor.from_pretrained("snap-research/efficientformer-l1-300") >>> model = EfficientFormerForImageClassification.from_pretrained("snap-research/efficientformer-l1-300") >>> inputs = image_processor(image, return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> >>> predicted_label = logits.argmax(-1).item() >>> print(model.config.id2label[predicted_label]) Egyptian cat EfficientFormerForImageClassificationWithTeacher class transformers.EfficientFormerForImageClassificationWithTeacher < source > ( config: EfficientFormerConfig ) Parameters config (EfficientFormerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. EfficientFormer Model transformer with image classification heads on top (a linear layer on top of the final hidden state of the [CLS] token and a linear layer on top of the final hidden state of the distillation token) e.g. for ImageNet. This model supports inference-only. Fine-tuning with distillation (i.e. with a teacher) is not yet supported. This model is a PyTorch nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.efficientformer.modeling_efficientformer.EfficientFormerForImageClassificationWithTeacherOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using ViTImageProcessor. See ViTImageProcessor.preprocess() for details. 
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.models.efficientformer.modeling_efficientformer.EfficientFormerForImageClassificationWithTeacherOutput or tuple(torch.FloatTensor) A transformers.models.efficientformer.modeling_efficientformer.EfficientFormerForImageClassificationWithTeacherOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (EfficientFormerConfig) and inputs. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Prediction scores as the average of the cls_logits and distillation logits. cls_logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Prediction scores of the classification head (i.e. the linear layer on top of the final hidden state of the class token). distillation_logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Prediction scores of the distillation head (i.e. the linear layer on top of the final hidden state of the distillation token). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The EfficientFormerForImageClassificationWithTeacher forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoImageProcessor, EfficientFormerForImageClassificationWithTeacher >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> image_processor = AutoImageProcessor.from_pretrained("snap-research/efficientformer-l1-300") >>> model = EfficientFormerForImageClassificationWithTeacher.from_pretrained("snap-research/efficientformer-l1-300") >>> inputs = image_processor(image, return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> >>> predicted_label = logits.argmax(-1).item() >>> print(model.config.id2label[predicted_label]) Egyptian cat TFEfficientFormerModel class transformers.TFEfficientFormerModel < source > ( *args **kwargs ) Parameters config (EfficientFormerConfig) — Model configuration class with all the parameters of the model. 
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare EfficientFormer Model transformer outputting raw hidden-states without any specific head on top. This model is a TensorFlow tf.keras.layers.Layer. Use it as a regular TensorFlow Module and refer to the TensorFlow documentation for all matter related to general usage and behavior. call < source > ( pixel_values: typing.Optional[tensorflow.python.framework.ops.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None training: bool = False ) → transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor) Parameters pixel_values ((tf.Tensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See EfficientFormerImageProcessor.call() for details. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (EfficientFormerConfig) and inputs. last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. This output is usually not a good summary of the semantic content of the input, you’re often better with averaging or pooling the sequence of hidden-states for the whole input sequence. hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFEfficientFormerModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: >>> from transformers import AutoImageProcessor, TFEfficientFormerModel >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> image_processor = AutoImageProcessor.from_pretrained("snap-research/efficientformer-l1-300") >>> model = TFEfficientFormerModel.from_pretrained("snap-research/efficientformer-l1-300") >>> inputs = image_processor(image, return_tensors="tf") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state >>> list(last_hidden_states.shape) [1, 49, 448] TFEfficientFormerForImageClassification class transformers.TFEfficientFormerForImageClassification < source > ( *args **kwargs ) Parameters config (EfficientFormerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. EfficientFormer Model transformer with an image classification head on top of pooled last hidden state, e.g. for ImageNet. This model is a TensorFlow tf.keras.layers.Layer. Use it as a regular TensorFlow Module and refer to the TensorFlow documentation for all matter related to general usage and behavior. call < source > ( pixel_values: typing.Optional[tensorflow.python.framework.ops.Tensor] = None labels: typing.Optional[tensorflow.python.framework.ops.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None training: bool = False ) → transformers.modeling_tf_outputs.TFImageClassifierOutput or tuple(tf.Tensor) Parameters pixel_values ((tf.Tensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See EfficientFormerImageProcessor.call() for details. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (tf.Tensor of shape (batch_size,), optional) — Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). Returns transformers.modeling_tf_outputs.TFImageClassifierOutput or tuple(tf.Tensor) A transformers.modeling_tf_outputs.TFImageClassifierOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (EfficientFormerConfig) and inputs. loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). 
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states (also called feature maps) of the model at the output of each stage. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFEfficientFormerForImageClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoImageProcessor, TFEfficientFormerForImageClassification >>> import tensorflow as tf >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> image_processor = AutoImageProcessor.from_pretrained("snap-research/efficientformer-l1-300") >>> model = TFEfficientFormerForImageClassification.from_pretrained("snap-research/efficientformer-l1-300") >>> inputs = image_processor(image, return_tensors="tf") >>> logits = model(**inputs).logits >>> >>> predicted_label = int(tf.math.argmax(logits, axis=-1)) >>> print(model.config.id2label[predicted_label]) LABEL_281 TFEfficientFormerForImageClassificationWithTeacher class transformers.TFEfficientFormerForImageClassificationWithTeacher < source > ( *args **kwargs ) Parameters config (EfficientFormerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. EfficientFormer Model transformer with image classification heads on top (a linear layer on top of the final hidden state and a linear layer on top of the final hidden state of the distillation token) e.g. for ImageNet. This model supports inference-only. Fine-tuning with distillation (i.e. with a teacher) is not yet supported. This model is a TensorFlow tf.keras.layers.Layer. Use it as a regular TensorFlow Module and refer to the TensorFlow documentation for all matter related to general usage and behavior. call < source > ( pixel_values: typing.Optional[tensorflow.python.framework.ops.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None training: bool = False ) → transformers.models.efficientformer.modeling_tf_efficientformer.TFEfficientFormerForImageClassificationWithTeacherOutput or tuple(tf.Tensor) Parameters pixel_values ((tf.Tensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See EfficientFormerImageProcessor.call() for details. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.models.efficientformer.modeling_tf_efficientformer.TFEfficientFormerForImageClassificationWithTeacherOutput or tuple(tf.Tensor) A transformers.models.efficientformer.modeling_tf_efficientformer.TFEfficientFormerForImageClassificationWithTeacherOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (EfficientFormerConfig) and inputs. The TFEfficientFormerForImageClassificationWithTeacher forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Output type of EfficientFormerForImageClassificationWithTeacher. logits (tf.Tensor of shape (batch_size, config.num_labels)) — Prediction scores as the average of the cls_logits and distillation logits. cls_logits (tf.Tensor of shape (batch_size, config.num_labels)) — Prediction scores of the classification head (i.e. the linear layer on top of the final hidden state of the class token). distillation_logits (tf.Tensor of shape (batch_size, config.num_labels)) — Prediction scores of the distillation head (i.e. the linear layer on top of the final hidden state of the distillation token). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Example: >>> from transformers import AutoImageProcessor, TFEfficientFormerForImageClassificationWithTeacher >>> import tensorflow as tf >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> image_processor = AutoImageProcessor.from_pretrained("snap-research/efficientformer-l1-300") >>> model = TFEfficientFormerForImageClassificationWithTeacher.from_pretrained("snap-research/efficientformer-l1-300") >>> inputs = image_processor(image, return_tensors="tf") >>> logits = model(**inputs).logits >>> >>> predicted_label = int(tf.math.argmax(logits, axis=-1)) >>> print(model.config.id2label[predicted_label]) LABEL_281
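Since the returned logits are the average of the classification and distillation heads, both heads can also be inspected separately through the output fields documented above; a short sketch continuing the example:

>>> outputs = model(**inputs)
>>> # per-head scores exposed by the output class
>>> cls_logits = outputs.cls_logits  # classification head
>>> distillation_logits = outputs.distillation_logits  # distillation head
>>> averaged = (cls_logits + distillation_logits) / 2  # matches outputs.logits, the averaged scores described above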
https://huggingface.co/docs/transformers/model_doc/efficientnet
EfficientNet Overview The EfficientNet model was proposed in EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks by Mingxing Tan and Quoc V. Le. EfficientNets are a family of image classification models that achieve state-of-the-art accuracy while being an order of magnitude smaller and faster than previous models. The abstract from the paper is the following: Convolutional Neural Networks (ConvNets) are commonly developed at a fixed resource budget, and then scaled up for better accuracy if more resources are available. In this paper, we systematically study model scaling and identify that carefully balancing network depth, width, and resolution can lead to better performance. Based on this observation, we propose a new scaling method that uniformly scales all dimensions of depth/width/resolution using a simple yet highly effective compound coefficient. We demonstrate the effectiveness of this method on scaling up MobileNets and ResNet. To go even further, we use neural architecture search to design a new baseline network and scale it up to obtain a family of models, called EfficientNets, which achieve much better accuracy and efficiency than previous ConvNets. In particular, our EfficientNet-B7 achieves state-of-the-art 84.3% top-1 accuracy on ImageNet, while being 8.4x smaller and 6.1x faster on inference than the best existing ConvNet. Our EfficientNets also transfer well and achieve state-of-the-art accuracy on CIFAR-100 (91.7%), Flowers (98.8%), and 3 other transfer learning datasets, with an order of magnitude fewer parameters. This model was contributed by adirik. The original code can be found here. EfficientNetConfig class transformers.EfficientNetConfig < source > ( num_channels: int = 3 image_size: int = 600 width_coefficient: float = 2.0 depth_coefficient: float = 3.1 depth_divisor: int = 8 kernel_sizes: typing.List[int] = [3, 3, 5, 3, 5, 5, 3] in_channels: typing.List[int] = [32, 16, 24, 40, 80, 112, 192] out_channels: typing.List[int] = [16, 24, 40, 80, 112, 192, 320] depthwise_padding: typing.List[int] = [] strides: typing.List[int] = [1, 2, 2, 2, 1, 2, 1] num_block_repeats: typing.List[int] = [1, 2, 2, 3, 3, 4, 1] expand_ratios: typing.List[int] = [1, 6, 6, 6, 6, 6, 6] squeeze_expansion_ratio: float = 0.25 hidden_act: str = 'swish' hidden_dim: int = 2560 pooling_type: str = 'mean' initializer_range: float = 0.02 batch_norm_eps: float = 0.001 batch_norm_momentum: float = 0.99 dropout_rate: float = 0.5 drop_connect_rate: float = 0.2 **kwargs ) Parameters num_channels (int, optional, defaults to 3) — The number of input channels. image_size (int, optional, defaults to 600) — The input image size. width_coefficient (float, optional, defaults to 2.0) — Scaling coefficient for network width at each stage. depth_coefficient (float, optional, defaults to 3.1) — Scaling coefficient for network depth at each stage. depth_divisor (int, optional, defaults to 8) — A unit of network width. kernel_sizes (List[int], optional, defaults to [3, 3, 5, 3, 5, 5, 3]) — List of kernel sizes to be used in each block. in_channels (List[int], optional, defaults to [32, 16, 24, 40, 80, 112, 192]) — List of input channel sizes to be used in each block for convolutional layers. out_channels (List[int], optional, defaults to [16, 24, 40, 80, 112, 192, 320]) — List of output channel sizes to be used in each block for convolutional layers. depthwise_padding (List[int], optional, defaults to []) — List of block indices with square padding.
strides (List[int], optional, defaults to [1, 2, 2, 2, 1, 2, 1]) — List of stride sizes to be used in each block for convolutional layers. num_block_repeats (List[int], optional, defaults to [1, 2, 2, 3, 3, 4, 1]) — List of the number of times each block is to be repeated. expand_ratios (List[int], optional, defaults to [1, 6, 6, 6, 6, 6, 6]) — List of scaling coefficients for each block. squeeze_expansion_ratio (float, optional, defaults to 0.25) — Squeeze expansion ratio. hidden_act (str or function, optional, defaults to "swish") — The non-linear activation function (function or string) in each block. If string, "gelu", "relu", "selu", "gelu_new", "silu" and "mish" are supported. hidden_dim (int, optional, defaults to 2560) — The hidden dimension of the layer before the classification head. pooling_type (str or function, optional, defaults to "mean") — Type of final pooling to be applied before the dense classification head. Available options are ["mean", "max"]. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. batch_norm_eps (float, optional, defaults to 1e-3) — The epsilon used by the batch normalization layers. batch_norm_momentum (float, optional, defaults to 0.99) — The momentum used by the batch normalization layers. dropout_rate (float, optional, defaults to 0.5) — The dropout rate to be applied before the final classifier layer. drop_connect_rate (float, optional, defaults to 0.2) — The drop rate for skip connections. This is the configuration class to store the configuration of an EfficientNetModel. It is used to instantiate an EfficientNet model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the EfficientNet google/efficientnet-b7 architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import EfficientNetConfig, EfficientNetModel >>> # Initializing an EfficientNet configuration >>> configuration = EfficientNetConfig() >>> # Initializing a model (with random weights) from the configuration >>> model = EfficientNetModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config EfficientNetImageProcessor class transformers.EfficientNetImageProcessor < source > ( do_resize: bool = True size: typing.Dict[str, int] = None resample: Resampling = 0 do_center_crop: bool = False crop_size: typing.Dict[str, int] = None rescale_factor: typing.Union[int, float] = 0.00392156862745098 rescale_offset: bool = False do_rescale: bool = True do_normalize: bool = True image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None include_top: bool = True **kwargs ) Parameters do_resize (bool, optional, defaults to True) — Whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by do_resize in preprocess. size (Dict[str, int], optional, defaults to {"height": 346, "width": 346}) — Size of the image after resize. Can be overridden by size in preprocess. resample (PILImageResampling filter, optional, defaults to PILImageResampling.NEAREST) — Resampling filter to use if resizing the image. Can be overridden by resample in preprocess. do_center_crop (bool, optional, defaults to False) — Whether to center crop the image. If the input size is smaller than crop_size along any edge, the image is padded with 0’s and then center cropped.
Can be overridden by do_center_crop in preprocess. crop_size (Dict[str, int], optional, defaults to {"height": 289, "width": 289}) — Desired output size when applying center-cropping. Can be overridden by crop_size in preprocess. do_rescale (bool, optional, defaults to True) — Whether to rescale the image by the specified scale rescale_factor. Can be overridden by the do_rescale parameter in the preprocess method. rescale_factor (int or float, optional, defaults to 1/255) — Scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter in the preprocess method. rescale_offset (bool, optional, defaults to False) — Whether to rescale the image between [-scale_range, scale_range] instead of [0, scale_range]. Can be overridden by the rescale_offset parameter in the preprocess method. do_normalize (bool, optional, defaults to True) — Whether to normalize the image. Can be overridden by the do_normalize parameter in the preprocess method. image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) — Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the image_mean parameter in the preprocess method. image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) — Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the image_std parameter in the preprocess method. include_top (bool, optional, defaults to True) — Whether to rescale the image again. Should be set to True if the inputs are used for image classification. Constructs an EfficientNet image processor. preprocess < source > ( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]] do_resize: bool = None size: typing.Dict[str, int] = None resample = None do_center_crop: bool = None crop_size: typing.Dict[str, int] = None do_rescale: bool = None rescale_factor: float = None rescale_offset: bool = None do_normalize: bool = None image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None include_top: bool = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'> input_data_format: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None **kwargs ) Parameters images (ImageInput) — Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set do_rescale=False. do_resize (bool, optional, defaults to self.do_resize) — Whether to resize the image. size (Dict[str, int], optional, defaults to self.size) — Size of the image after resize. resample (PILImageResampling, optional, defaults to self.resample) — PILImageResampling filter to use if resizing the image. Only has an effect if do_resize is set to True. do_center_crop (bool, optional, defaults to self.do_center_crop) — Whether to center crop the image. crop_size (Dict[str, int], optional, defaults to self.crop_size) — Size of the image after center crop.
If one edge of the image is smaller than crop_size, it will be padded with zeros and then cropped. do_rescale (bool, optional, defaults to self.do_rescale) — Whether to rescale the image values to the range [0, 1]. rescale_factor (float, optional, defaults to self.rescale_factor) — Rescale factor to rescale the image by if do_rescale is set to True. rescale_offset (bool, optional, defaults to self.rescale_offset) — Whether to rescale the image between [-scale_range, scale_range] instead of [0, scale_range]. do_normalize (bool, optional, defaults to self.do_normalize) — Whether to normalize the image. image_mean (float or List[float], optional, defaults to self.image_mean) — Image mean. image_std (float or List[float], optional, defaults to self.image_std) — Image standard deviation. include_top (bool, optional, defaults to self.include_top) — Rescales the image again for image classification if set to True. return_tensors (str or TensorType, optional) — The type of tensors to return. Can be one of: None: Return a list of np.ndarray. TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor. TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor. TensorType.NUMPY or 'np': Return a batch of type np.ndarray. TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray. data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) — The channel dimension format for the output image. Can be one of: ChannelDimension.FIRST: image in (num_channels, height, width) format. ChannelDimension.LAST: image in (height, width, num_channels) format. input_data_format (ChannelDimension or str, optional) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of: "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format. "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format. "none" or ChannelDimension.NONE: image in (height, width) format. Preprocess an image or batch of images. EfficientNetModel class transformers.EfficientNetModel < source > ( config: EfficientNetConfig ) Parameters config (EfficientNetConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare EfficientNet model outputting raw features without any specific head on top. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: FloatTensor = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See AutoImageProcessor.__call__() for details. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. 
Returns transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor) A transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (EfficientNetConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state after a pooling operation on the spatial dimensions. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, num_channels, height, width). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. The EfficientNetModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoImageProcessor, EfficientNetModel >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> image_processor = AutoImageProcessor.from_pretrained("google/efficientnet-b7") >>> model = EfficientNetModel.from_pretrained("google/efficientnet-b7") >>> inputs = image_processor(image, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state >>> list(last_hidden_states.shape) [1, 768, 7, 7] EfficientNetForImageClassification class transformers.EfficientNetForImageClassification < source > ( config ) Parameters config (EfficientNetConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. EfficientNet Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for ImageNet. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: FloatTensor = None labels: typing.Optional[torch.LongTensor] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See AutoImageProcessor.__call__() for details. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. 
labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (EfficientNetConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the model at the output of each stage. The EfficientNetForImageClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoImageProcessor, EfficientNetForImageClassification >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> image_processor = AutoImageProcessor.from_pretrained("google/efficientnet-b7") >>> model = EfficientNetForImageClassification.from_pretrained("google/efficientnet-b7") >>> inputs = image_processor(image, return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> >>> predicted_label = logits.argmax(-1).item() >>> print(model.config.id2label[predicted_label]) tabby, tabby cat
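The EfficientNetImageProcessor described above can also be used on its own, independently of the model classes. Below is a minimal sketch, not taken from the original documentation, that assumes the google/efficientnet-b7 checkpoint and the huggingface/cats-image dataset already used in the examples above; the exact output resolution depends on the size and crop_size settings stored with the checkpoint.

>>> from transformers import EfficientNetImageProcessor
>>> from datasets import load_dataset

>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]

>>> # load the preprocessing settings saved with the checkpoint
>>> image_processor = EfficientNetImageProcessor.from_pretrained("google/efficientnet-b7")

>>> # resize, crop, rescale and normalize the image, returning a PyTorch batch
>>> inputs = image_processor(image, return_tensors="pt")
>>> print(inputs["pixel_values"].shape)  # (1, 3, height, width)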
https://huggingface.co/docs/transformers/main_classes/optimizer_schedules
Optimization The .optimization module provides: an optimizer with a weight decay fix that can be used to fine-tune models, several schedules in the form of schedule objects that inherit from _LRSchedule, and a gradient accumulation class to accumulate the gradients of multiple batches. AdamW (PyTorch) class transformers.AdamW < source > ( params: typing.Iterable[torch.nn.parameter.Parameter] lr: float = 0.001 betas: typing.Tuple[float, float] = (0.9, 0.999) eps: float = 1e-06 weight_decay: float = 0.0 correct_bias: bool = True no_deprecation_warning: bool = False ) Parameters params (Iterable[nn.parameter.Parameter]) — Iterable of parameters to optimize or dictionaries defining parameter groups. lr (float, optional, defaults to 1e-3) — The learning rate to use. betas (Tuple[float,float], optional, defaults to (0.9, 0.999)) — Adam’s betas parameters (b1, b2). eps (float, optional, defaults to 1e-6) — Adam’s epsilon for numerical stability. weight_decay (float, optional, defaults to 0) — Decoupled weight decay to apply. correct_bias (bool, optional, defaults to True) — Whether or not to correct bias in Adam (for instance, in the BERT TF repository they use False). no_deprecation_warning (bool, optional, defaults to False) — A flag used to disable the deprecation warning (set to True to disable the warning). Implements the Adam algorithm with the weight decay fix as introduced in Decoupled Weight Decay Regularization. step < source > ( closure: typing.Callable = None ) Parameters closure (Callable, optional) — A closure that reevaluates the model and returns the loss. Performs a single optimization step. AdaFactor (PyTorch) class transformers.Adafactor < source > ( params lr = None eps = (1e-30, 0.001) clip_threshold = 1.0 decay_rate = -0.8 beta1 = None weight_decay = 0.0 scale_parameter = True relative_step = True warmup_init = False ) Parameters params (Iterable[nn.parameter.Parameter]) — Iterable of parameters to optimize or dictionaries defining parameter groups. lr (float, optional) — The external learning rate. eps (Tuple[float, float], optional, defaults to (1e-30, 1e-3)) — Regularization constants for the square gradient and the parameter scale, respectively. clip_threshold (float, optional, defaults to 1.0) — Threshold of the root mean square of the final gradient update. decay_rate (float, optional, defaults to -0.8) — Coefficient used to compute running averages of the square gradient. beta1 (float, optional) — Coefficient used for computing running averages of the gradient. weight_decay (float, optional, defaults to 0) — Weight decay (L2 penalty). scale_parameter (bool, optional, defaults to True) — If True, the learning rate is scaled by the root mean square. relative_step (bool, optional, defaults to True) — If True, a time-dependent learning rate is computed instead of the external learning rate. warmup_init (bool, optional, defaults to False) — Time-dependent learning rate computation depends on whether warm-up initialization is being used. The Adafactor PyTorch implementation can be used as a drop-in replacement for Adam. Original fairseq code: https://github.com/pytorch/fairseq/blob/master/fairseq/optim/adafactor.py Paper: Adafactor: Adaptive Learning Rates with Sublinear Memory Cost https://arxiv.org/abs/1804.04235 Note that this optimizer internally adjusts the learning rate depending on the scale_parameter, relative_step and warmup_init options. To use a manual (external) learning rate schedule you should set scale_parameter=False and relative_step=False. 
This implementation handles low-precision (FP16, bfloat) values, but we have not thoroughly tested. Recommended T5 finetuning settings (https://discuss.huggingface.co/t/t5-finetuning-tips/684/3): Training without LR warmup or clip_threshold is not recommended. use scheduled LR warm-up to fixed LR use clip_threshold=1.0 (https://arxiv.org/abs/1804.04235) Disable relative updates Use scale_parameter=False Additional optimizer operations like gradient clipping should not be used alongside Adafactor Example: Adafactor(model.parameters(), scale_parameter=False, relative_step=False, warmup_init=False, lr=1e-3) Others reported the following combination to work well: Adafactor(model.parameters(), scale_parameter=True, relative_step=True, warmup_init=True, lr=None) When using lr=None with Trainer you will most likely need to use AdafactorSchedule scheduler as following: from transformers.optimization import Adafactor, AdafactorSchedule optimizer = Adafactor(model.parameters(), scale_parameter=True, relative_step=True, warmup_init=True, lr=None) lr_scheduler = AdafactorSchedule(optimizer) trainer = Trainer(..., optimizers=(optimizer, lr_scheduler)) Usage: optimizer = Adafactor( model.parameters(), lr=1e-3, eps=(1e-30, 1e-3), clip_threshold=1.0, decay_rate=-0.8, beta1=None, weight_decay=0.0, relative_step=False, scale_parameter=False, warmup_init=False, ) step < source > ( closure = None ) Parameters closure (callable, optional) — A closure that reevaluates the model and returns the loss. Performs a single optimization step AdamWeightDecay (TensorFlow) class transformers.AdamWeightDecay < source > ( learning_rate: typing.Union[float, keras.src.optimizers.schedules.learning_rate_schedule.LearningRateSchedule] = 0.001 beta_1: float = 0.9 beta_2: float = 0.999 epsilon: float = 1e-07 amsgrad: bool = False weight_decay_rate: float = 0.0 include_in_weight_decay: typing.Optional[typing.List[str]] = None exclude_from_weight_decay: typing.Optional[typing.List[str]] = None name: str = 'AdamWeightDecay' **kwargs ) Parameters learning_rate (Union[float, tf.keras.optimizers.schedules.LearningRateSchedule], optional, defaults to 1e-3) — The learning rate to use or a schedule. beta_1 (float, optional, defaults to 0.9) — The beta1 parameter in Adam, which is the exponential decay rate for the 1st momentum estimates. beta_2 (float, optional, defaults to 0.999) — The beta2 parameter in Adam, which is the exponential decay rate for the 2nd momentum estimates. epsilon (float, optional, defaults to 1e-7) — The epsilon parameter in Adam, which is a small constant for numerical stability. amsgrad (bool, optional, default to False) — Whether to apply AMSGrad variant of this algorithm or not, see On the Convergence of Adam and Beyond. weight_decay_rate (float, optional, defaults to 0) — The weight decay to apply. include_in_weight_decay (List[str], optional) — List of the parameter names (or re patterns) to apply weight decay to. If none is passed, weight decay is applied to all parameters by default (unless they are in exclude_from_weight_decay). exclude_from_weight_decay (List[str], optional) — List of the parameter names (or re patterns) to exclude from applying weight decay to. If a include_in_weight_decay is passed, the names in it will supersede this list. name (str, optional, defaults to ‘AdamWeightDecay’) — Optional name for the operations created when applying gradients. kwargs (Dict[str, Any], optional) — Keyword arguments. Allowed to be {clipnorm, clipvalue, lr, decay}. 
clipnorm is clip gradients by norm; clipvalue is clip gradients by value, decay is included for backward compatibility to allow time inverse decay of learning rate. lr is included for backward compatibility, recommended to use learning_rate instead. Adam enables L2 weight decay and clip_by_global_norm on gradients. Just adding the square of the weights to the loss function is not the correct way of using L2 regularization/weight decay with Adam, since that will interact with the m and v parameters in strange ways as shown in Decoupled Weight Decay Regularization. Instead we want to decay the weights in a manner that doesn’t interact with the m/v parameters. This is equivalent to adding the square of the weights to the loss with plain (non-momentum) SGD. Creates an optimizer from its config with WarmUp custom object. transformers.create_optimizer < source > ( init_lr: float num_train_steps: int num_warmup_steps: int min_lr_ratio: float = 0.0 adam_beta1: float = 0.9 adam_beta2: float = 0.999 adam_epsilon: float = 1e-08 adam_clipnorm: typing.Optional[float] = None adam_global_clipnorm: typing.Optional[float] = None weight_decay_rate: float = 0.0 power: float = 1.0 include_in_weight_decay: typing.Optional[typing.List[str]] = None ) Parameters init_lr (float) — The desired learning rate at the end of the warmup phase. num_train_steps (int) — The total number of training steps. num_warmup_steps (int) — The number of warmup steps. min_lr_ratio (float, optional, defaults to 0) — The final learning rate at the end of the linear decay will be init_lr * min_lr_ratio. adam_beta1 (float, optional, defaults to 0.9) — The beta1 to use in Adam. adam_beta2 (float, optional, defaults to 0.999) — The beta2 to use in Adam. adam_epsilon (float, optional, defaults to 1e-8) — The epsilon to use in Adam. adam_clipnorm (float, optional, defaults to None) — If not None, clip the gradient norm for each weight tensor to this value. adam_global_clipnorm (float, optional, defaults to None) — If not None, clip gradient norm to this value. When using this argument, the norm is computed over all weight tensors, as if they were concatenated into a single vector. weight_decay_rate (float, optional, defaults to 0) — The weight decay to use. power (float, optional, defaults to 1.0) — The power to use for PolynomialDecay. include_in_weight_decay (List[str], optional) — List of the parameter names (or re patterns) to apply weight decay to. If none is passed, weight decay is applied to all parameters except bias and layer norm parameters. Creates an optimizer with a learning rate schedule using a warmup phase followed by a linear decay. Schedules Learning Rate Schedules (Pytorch) class transformers.SchedulerType < source > ( value names = None module = None qualname = None type = None start = 1 ) An enumeration. transformers.get_scheduler < source > ( name: typing.Union[str, transformers.trainer_utils.SchedulerType] optimizer: Optimizer num_warmup_steps: typing.Optional[int] = None num_training_steps: typing.Optional[int] = None ) Parameters name (str or SchedulerType) — The name of the scheduler to use. optimizer (torch.optim.Optimizer) — The optimizer that will be used during training. num_warmup_steps (int, optional) — The number of warmup steps to do. This is not required by all schedulers (hence the argument being optional), the function will raise an error if it’s unset and the scheduler type requires it. num_training_steps (`int“, optional) — The number of training steps to do. 
This is not required by all schedulers (hence the argument being optional), the function will raise an error if it’s unset and the scheduler type requires it. Unified API to get any scheduler from its name. transformers.get_constant_schedule < source > ( optimizer: Optimizer last_epoch: int = -1 ) Parameters optimizer (~torch.optim.Optimizer) — The optimizer for which to schedule the learning rate. last_epoch (int, optional, defaults to -1) — The index of the last epoch when resuming training. Create a schedule with a constant learning rate, using the learning rate set in optimizer. transformers.get_constant_schedule_with_warmup < source > ( optimizer: Optimizer num_warmup_steps: int last_epoch: int = -1 ) Parameters optimizer (~torch.optim.Optimizer) — The optimizer for which to schedule the learning rate. num_warmup_steps (int) — The number of steps for the warmup phase. last_epoch (int, optional, defaults to -1) — The index of the last epoch when resuming training. Create a schedule with a constant learning rate preceded by a warmup period during which the learning rate increases linearly between 0 and the initial lr set in the optimizer. transformers.get_cosine_schedule_with_warmup < source > ( optimizer: Optimizer num_warmup_steps: int num_training_steps: int num_cycles: float = 0.5 last_epoch: int = -1 ) Parameters optimizer (~torch.optim.Optimizer) — The optimizer for which to schedule the learning rate. num_warmup_steps (int) — The number of steps for the warmup phase. num_training_steps (int) — The total number of training steps. num_cycles (float, optional, defaults to 0.5) — The number of waves in the cosine schedule (the defaults is to just decrease from the max value to 0 following a half-cosine). last_epoch (int, optional, defaults to -1) — The index of the last epoch when resuming training. Create a schedule with a learning rate that decreases following the values of the cosine function between the initial lr set in the optimizer to 0, after a warmup period during which it increases linearly between 0 and the initial lr set in the optimizer. transformers.get_cosine_with_hard_restarts_schedule_with_warmup < source > ( optimizer: Optimizer num_warmup_steps: int num_training_steps: int num_cycles: int = 1 last_epoch: int = -1 ) Parameters optimizer (~torch.optim.Optimizer) — The optimizer for which to schedule the learning rate. num_warmup_steps (int) — The number of steps for the warmup phase. num_training_steps (int) — The total number of training steps. num_cycles (int, optional, defaults to 1) — The number of hard restarts to use. last_epoch (int, optional, defaults to -1) — The index of the last epoch when resuming training. Create a schedule with a learning rate that decreases following the values of the cosine function between the initial lr set in the optimizer to 0, with several hard restarts, after a warmup period during which it increases linearly between 0 and the initial lr set in the optimizer. transformers.get_linear_schedule_with_warmup < source > ( optimizer num_warmup_steps num_training_steps last_epoch = -1 ) Parameters optimizer (~torch.optim.Optimizer) — The optimizer for which to schedule the learning rate. num_warmup_steps (int) — The number of steps for the warmup phase. num_training_steps (int) — The total number of training steps. last_epoch (int, optional, defaults to -1) — The index of the last epoch when resuming training. 
Create a schedule with a learning rate that decreases linearly from the initial lr set in the optimizer to 0, after a warmup period during which it increases linearly from 0 to the initial lr set in the optimizer. transformers.get_polynomial_decay_schedule_with_warmup < source > ( optimizer num_warmup_steps num_training_steps lr_end = 1e-07 power = 1.0 last_epoch = -1 ) Parameters optimizer (~torch.optim.Optimizer) — The optimizer for which to schedule the learning rate. num_warmup_steps (int) — The number of steps for the warmup phase. num_training_steps (int) — The total number of training steps. lr_end (float, optional, defaults to 1e-7) — The end LR. power (float, optional, defaults to 1.0) — Power factor. last_epoch (int, optional, defaults to -1) — The index of the last epoch when resuming training. Create a schedule with a learning rate that decreases as a polynomial decay from the initial lr set in the optimizer to end lr defined by lr_end, after a warmup period during which it increases linearly from 0 to the initial lr set in the optimizer. Note: power defaults to 1.0 as in the fairseq implementation, which in turn is based on the original BERT implementation at https://github.com/google-research/bert/blob/f39e881b169b9d53bea03d2d341b31707a6c052b/optimization.py#L37 transformers.get_inverse_sqrt_schedule < source > ( optimizer: Optimizer num_warmup_steps: int timescale: int = None last_epoch: int = -1 ) Parameters optimizer (~torch.optim.Optimizer) — The optimizer for which to schedule the learning rate. num_warmup_steps (int) — The number of steps for the warmup phase. timescale (int, optional, defaults to num_warmup_steps) — Time scale. last_epoch (int, optional, defaults to -1) — The index of the last epoch when resuming training. Create a schedule with an inverse square-root learning rate, from the initial lr set in the optimizer, after a warmup period which increases lr linearly from 0 to the initial lr set in the optimizer. Warmup (TensorFlow) class transformers.WarmUp < source > ( initial_learning_rate: float decay_schedule_fn: typing.Callable warmup_steps: int power: float = 1.0 name: str = None ) Parameters initial_learning_rate (float) — The initial learning rate for the schedule after the warmup (so this will be the learning rate at the end of the warmup). decay_schedule_fn (Callable) — The schedule function to apply after the warmup for the rest of training. warmup_steps (int) — The number of steps for the warmup part of training. power (float, optional, defaults to 1) — The power to use for the polynomial warmup (defaults is a linear warmup). name (str, optional) — Optional name prefix for the returned tensors during the schedule. Applies a warmup schedule on a given learning rate decay schedule. Gradient Strategies GradientAccumulator (TensorFlow) class transformers.GradientAccumulator < source > ( ) Gradient accumulation utility. When used with a distribution strategy, the accumulator should be called in a replica context. Gradients will be accumulated locally on each replica and without synchronization. Users should then call .gradients, scale the gradients if required, and pass the result to apply_gradients. Resets the accumulated gradients on the current replica.
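To show how the optimizer and schedule objects above fit together, here is a minimal training-loop sketch that is not part of the original documentation; model and train_dataloader are assumed to exist already, and the hyperparameter values are only illustrative.

>>> from transformers import AdamW, get_linear_schedule_with_warmup

>>> num_epochs = 3
>>> num_training_steps = num_epochs * len(train_dataloader)  # train_dataloader is assumed to exist

>>> optimizer = AdamW(model.parameters(), lr=5e-5, weight_decay=0.01, no_deprecation_warning=True)
>>> lr_scheduler = get_linear_schedule_with_warmup(
...     optimizer, num_warmup_steps=100, num_training_steps=num_training_steps
... )

>>> for epoch in range(num_epochs):
...     for batch in train_dataloader:
...         loss = model(**batch).loss
...         loss.backward()
...         optimizer.step()      # update the parameters
...         lr_scheduler.step()   # advance the warmup followed by linear decay
...         optimizer.zero_grad()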
https://huggingface.co/docs/transformers/model_doc/encoder-decoder
Encoder Decoder Models Overview The EncoderDecoderModel can be used to initialize a sequence-to-sequence model with any pretrained autoencoding model as the encoder and any pretrained autoregressive model as the decoder. The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in Leveraging Pre-trained Checkpoints for Sequence Generation Tasks by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. After such an EncoderDecoderModel has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information). An application of this architecture could be to leverage two pretrained BertModel instances as the encoder and decoder for a summarization model, as was shown in Text Summarization with Pretrained Encoders by Yang Liu and Mirella Lapata. Randomly initializing EncoderDecoderModel from model configurations. EncoderDecoderModel can be randomly initialized from an encoder and a decoder config. In the following example, we show how to do this using the default BertModel configuration for the encoder and the default BertForCausalLM configuration for the decoder. >>> from transformers import BertConfig, EncoderDecoderConfig, EncoderDecoderModel >>> config_encoder = BertConfig() >>> config_decoder = BertConfig() >>> config = EncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder) >>> model = EncoderDecoderModel(config=config) Initializing EncoderDecoderModel from a pretrained encoder and a pretrained decoder. EncoderDecoderModel can be initialized from a pretrained encoder checkpoint and a pretrained decoder checkpoint. Note that any pretrained auto-encoding model, e.g. BERT, can serve as the encoder. As the decoder, you can use pretrained auto-encoding models, e.g. BERT, pretrained causal language models, e.g. GPT2, as well as the pretrained decoder part of sequence-to-sequence models, e.g. the decoder of BART. Depending on which architecture you choose as the decoder, the cross-attention layers might be randomly initialized. Initializing EncoderDecoderModel from a pretrained encoder and decoder checkpoint requires the model to be fine-tuned on a downstream task, as has been shown in the Warm-starting-encoder-decoder blog post. To do so, the EncoderDecoderModel class provides an EncoderDecoderModel.from_encoder_decoder_pretrained() method. >>> from transformers import EncoderDecoderModel, BertTokenizer >>> tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") >>> model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased") Loading an existing EncoderDecoderModel checkpoint and performing inference. To load fine-tuned checkpoints of the EncoderDecoderModel class, EncoderDecoderModel provides the from_pretrained(...) method just like any other model architecture in Transformers. To perform inference, one uses the generate method, which allows you to autoregressively generate text. This method supports various forms of decoding, such as greedy, beam search and multinomial sampling. >>> from transformers import AutoTokenizer, EncoderDecoderModel >>> >>> model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail") >>> tokenizer = AutoTokenizer.from_pretrained("patrickvonplaten/bert2bert_cnn_daily_mail") >>> >>> ARTICLE_TO_SUMMARIZE = ( ... "PG&E stated it scheduled the blackouts in response to forecasts for high winds " ... "amid dry conditions. The aim is to reduce the risk of wildfires. 
Nearly 800 thousand customers were " ... "scheduled to be affected by the shutoffs which were expected to last through at least midday tomorrow." ... ) >>> input_ids = tokenizer(ARTICLE_TO_SUMMARIZE, return_tensors="pt").input_ids >>> >>> generated_ids = model.generate(input_ids) >>> generated_text = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0] >>> print(generated_text) nearly 800 thousand customers were affected by the shutoffs. the aim is to reduce the risk of wildfires. nearly 800, 000 customers were expected to be affected by high winds amid dry conditions. pg & e said it scheduled the blackouts to last through at least midday tomorrow. Loading a PyTorch checkpoint into TFEncoderDecoderModel. TFEncoderDecoderModel.from_pretrained() currently doesn’t support initializing the model from a pytorch checkpoint. Passing from_pt=True to this method will throw an exception. If there are only pytorch checkpoints for a particular encoder-decoder model, a workaround is: >>> >>> from transformers import EncoderDecoderModel, TFEncoderDecoderModel >>> _model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert-cnn_dailymail-fp16") >>> _model.encoder.save_pretrained("./encoder") >>> _model.decoder.save_pretrained("./decoder") >>> model = TFEncoderDecoderModel.from_encoder_decoder_pretrained( ... "./encoder", "./decoder", encoder_from_pt=True, decoder_from_pt=True ... ) >>> >>> model.config = _model.config Training Once the model is created, it can be fine-tuned similar to BART, T5 or any other encoder-decoder model. As you can see, only 2 inputs are required for the model in order to compute a loss: input_ids (which are the input_ids of the encoded input sequence) and labels (which are the input_ids of the encoded target sequence). >>> from transformers import BertTokenizer, EncoderDecoderModel >>> tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") >>> model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased") >>> model.config.decoder_start_token_id = tokenizer.cls_token_id >>> model.config.pad_token_id = tokenizer.pad_token_id >>> input_ids = tokenizer( ... "The tower is 324 metres (1,063 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side.During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest man-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure to reach a height of 300 metres. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft).Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct.", ... return_tensors="pt", ... ).input_ids >>> labels = tokenizer( ... "the eiffel tower surpassed the washington monument to become the tallest structure in the world. it was the first structure to reach a height of 300 metres in paris in 1930. it is now taller than the chrysler building by 5. 2 metres ( 17 ft ) and is the second tallest free - standing structure in paris.", ... return_tensors="pt", ... ).input_ids >>> >>> loss = model(input_ids=input_ids, labels=labels).loss Detailed colab for training. This model was contributed by thomwolf. This model’s TensorFlow and Flax versions were contributed by ydshieh. 
EncoderDecoderConfig class transformers.EncoderDecoderConfig < source > ( **kwargs ) Parameters kwargs (optional) — Dictionary of keyword arguments. Notably: encoder (PretrainedConfig, optional) — An instance of a configuration object that defines the encoder config. decoder (PretrainedConfig, optional) — An instance of a configuration object that defines the decoder config. EncoderDecoderConfig is the configuration class to store the configuration of a EncoderDecoderModel. It is used to instantiate an Encoder Decoder model according to the specified arguments, defining the encoder and decoder configs. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Examples: >>> from transformers import BertConfig, EncoderDecoderConfig, EncoderDecoderModel >>> >>> config_encoder = BertConfig() >>> config_decoder = BertConfig() >>> config = EncoderDecoderConfig.from_encoder_decoder_configs(config_encoder, config_decoder) >>> >>> model = EncoderDecoderModel(config=config) >>> >>> config_encoder = model.config.encoder >>> config_decoder = model.config.decoder >>> >>> config_decoder.is_decoder = True >>> config_decoder.add_cross_attention = True >>> >>> model.save_pretrained("my-model") >>> >>> encoder_decoder_config = EncoderDecoderConfig.from_pretrained("my-model") >>> model = EncoderDecoderModel.from_pretrained("my-model", config=encoder_decoder_config) from_encoder_decoder_configs < source > ( encoder_config: PretrainedConfig decoder_config: PretrainedConfig **kwargs ) → EncoderDecoderConfig An instance of a configuration object Instantiate a EncoderDecoderConfig (or a derived class) from a pre-trained encoder model configuration and decoder model configuration. EncoderDecoderModel class transformers.EncoderDecoderModel < source > ( config: typing.Optional[transformers.configuration_utils.PretrainedConfig] = None encoder: typing.Optional[transformers.modeling_utils.PreTrainedModel] = None decoder: typing.Optional[transformers.modeling_utils.PreTrainedModel] = None ) Parameters config (EncoderDecoderConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. This class can be used to initialize a sequence-to-sequence model with any pretrained autoencoding model as the encoder and any pretrained autoregressive model as the decoder. The encoder is loaded via from_pretrained() function and the decoder is loaded via from_pretrained() function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream generative task, like summarization. The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in Leveraging Pre-trained Checkpoints for Sequence Generation Tasks by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. After such an Encoder Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other models (see the examples for more information). This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. 
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. EncoderDecoderModel is a generic model class that will be instantiated as a transformer architecture with one of the base model classes of the library as encoder and another one as decoder when created with the :meth~transformers.AutoModel.from_pretrained class method for the encoder and :meth~transformers.AutoModelForCausalLM.from_pretrained class method for the decoder. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None decoder_input_ids: typing.Optional[torch.LongTensor] = None decoder_attention_mask: typing.Optional[torch.BoolTensor] = None encoder_outputs: typing.Optional[typing.Tuple[torch.FloatTensor]] = None past_key_values: typing.Tuple[typing.Tuple[torch.FloatTensor]] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None **kwargs ) → transformers.modeling_outputs.Seq2SeqLMOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using PreTrainedTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using PreTrainedTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? If past_key_values is used, optionally only the last decoder_input_ids have to be input (see past_key_values). For training, decoder_input_ids are automatically created by the model by shifting the labels to the right, replacing -100 by the pad_token_id and prepending them with the decoder_start_token_id. decoder_attention_mask (torch.BoolTensor of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default. encoder_outputs (tuple(torch.FloatTensor), optional) — This tuple must consist of (last_hidden_state, optional: hidden_states, optional: attentions) last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) is a tensor of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. 
If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) — Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss for the decoder. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size] use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — If set to True, the model will return a ~utils.Seq2SeqLMOutput instead of a plain tuple. kwargs (optional) — Remaining dictionary of keyword arguments. Keyword arguments come in two flavors: Without a prefix which will be input as **encoder_kwargs for the encoder forward function. With a decoder_ prefix which will be input as **decoder_kwargs for the decoder forward function. A transformers.modeling_outputs.Seq2SeqLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (EncoderDecoderConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. 
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. The EncoderDecoderModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import EncoderDecoderModel, BertTokenizer >>> import torch >>> tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") >>> model = EncoderDecoderModel.from_encoder_decoder_pretrained( ... "bert-base-uncased", "bert-base-uncased" ... 
) >>> >>> model.config.decoder_start_token_id = tokenizer.cls_token_id >>> model.config.pad_token_id = tokenizer.pad_token_id >>> model.config.vocab_size = model.config.decoder.vocab_size >>> input_ids = tokenizer("This is a really long text", return_tensors="pt").input_ids >>> labels = tokenizer("This is the corresponding summary", return_tensors="pt").input_ids >>> outputs = model(input_ids=input_ids, labels=labels) >>> loss, logits = outputs.loss, outputs.logits >>> >>> model.save_pretrained("bert2bert") >>> model = EncoderDecoderModel.from_pretrained("bert2bert") >>> >>> generated = model.generate(input_ids) from_encoder_decoder_pretrained < source > ( encoder_pretrained_model_name_or_path: str = None decoder_pretrained_model_name_or_path: str = None *model_args **kwargs ) Parameters encoder_pretrained_model_name_or_path (str, optional) — Information necessary to initiate the encoder. Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. decoder_pretrained_model_name_or_path (str, optional, defaults to None) — Information necessary to initiate the decoder. Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (remaining positional arguments, optional) — All remaining positional arguments will be passed to the underlying model’s __init__ method. kwargs (remaining dictionary of keyword arguments, optional) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). To update the encoder configuration, use the prefix encoder_ for each configuration parameter. To update the decoder configuration, use the prefix decoder_ for each configuration parameter. To update the parent model configuration, do not use a prefix for each configuration parameter. Behaves differently depending on whether a config is provided or automatically loaded. Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model checkpoints. The model is set in evaluation mode by default using model.eval() (Dropout modules are deactivated). To train the model, you need to first set it back in training mode with model.train(). 
Example: >>> from transformers import EncoderDecoderModel >>> >>> model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased") >>> >>> model.save_pretrained("./bert2bert") >>> >>> model = EncoderDecoderModel.from_pretrained("./bert2bert") TFEncoderDecoderModel class transformers.TFEncoderDecoderModel < source > ( *args **kwargs ) Parameters config (EncoderDecoderConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. This class can be used to initialize a sequence-to-sequence model with any pretrained autoencoding model as the encoder and any pretrained autoregressive model as the decoder. The encoder is loaded via from_pretrained() function and the decoder is loaded via from_pretrained() function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream generative task, like summarization. The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in Leveraging Pre-trained Checkpoints for Sequence Generation Tasks by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu. After such an Encoder Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other models (see the examples for more information). This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TFEncoderDecoderModel is a generic model class that will be instantiated as a transformer architecture with one of the base model classes of the library as encoder and another one as decoder when created with the from_pretrained() class method for the encoder and from_pretrained() class method for the decoder. call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None decoder_input_ids: np.ndarray | tf.Tensor | None = None decoder_attention_mask: np.ndarray | tf.Tensor | None = None encoder_outputs: np.ndarray | tf.Tensor | None = None past_key_values: Tuple[Tuple[tf.Tensor]] | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None decoder_inputs_embeds: np.ndarray | tf.Tensor | None = None labels: np.ndarray | tf.Tensor | None = None use_cache: Optional[bool] = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: bool = False **kwargs ) → transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or tuple(tf.Tensor) Parameters input_ids (np.ndarray, tf.Tensor, List[tf.Tensor] `Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using PreTrainedTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. 
Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? decoder_input_ids (np.ndarray or tf.Tensor of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using PreTrainedTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? If past_key_values is used, optionally only the last decoder_input_ids have to be input (see past_key_values). Provide for sequence to sequence training to the decoder. Indices can be obtained using PreTrainedTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. decoder_attention_mask (np.ndarray or tf.Tensor of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default. encoder_outputs (tuple(tuple(tf.Tensor), optional) — This tuple must consist of (last_hidden_state, optional: hidden_states, optional: attentions) last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) is a tensor of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. past_key_values (tuple(tuple(tf.Tensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. decoder_inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, target_sequence_length, hidden_size), optional) — Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix. labels (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss for the decoder. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size] use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. 
return_dict (bool, optional) — If set to True, the model will return a ~utils.Seq2SeqLMOutput instead of a plain tuple. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). kwargs (optional) — Remaining dictionary of keyword arguments. Keyword arguments come in two flavors: Without a prefix which will be input as **encoder_kwargs for the encoder forward function. With a decoder_ prefix which will be input as `**decoder_kwargs“ for the decoder forward function. A transformers.modeling_tf_outputs.TFSeq2SeqLMOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (EncoderDecoderConfig) and inputs. loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss. logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)). Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be used (see past_key_values input) to speed up sequential decoding. decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. 
encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. The TFEncoderDecoderModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import TFEncoderDecoderModel, BertTokenizer >>> >>> model = TFEncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-cased", "gpt2") >>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased") >>> >>> input_ids = tokenizer.encode( ... "Hello, my dog is cute", add_special_tokens=True, return_tensors="tf" ... ) >>> outputs = model(input_ids=input_ids, decoder_input_ids=input_ids) >>> >>> outputs = model(input_ids=input_ids, decoder_input_ids=input_ids, labels=input_ids) >>> loss, logits = outputs.loss, outputs.logits >>> >>> model.save_pretrained("bert2gpt2") >>> model = TFEncoderDecoderModel.from_pretrained("bert2gpt2") >>> >>> generated = model.generate(input_ids, decoder_start_token_id=model.config.decoder.bos_token_id) from_encoder_decoder_pretrained < source > ( encoder_pretrained_model_name_or_path: str = None decoder_pretrained_model_name_or_path: str = None *model_args **kwargs ) Parameters encoder_pretrained_model_name_or_path (str, optional) — Information necessary to initiate the encoder. Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a pytorch index checkpoint file (e.g, ./pt_model/). In this case, encoder_from_pt should be set to True. decoder_pretrained_model_name_or_path (str, optional, defaults to None) — Information necessary to initiate the decoder. Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a pytorch checkpoint file (e.g, ./pt_model/). In this case, decoder_from_pt should be set to True. model_args (remaining positional arguments, optional) — All remaning positional arguments will be passed to the underlying model’s __init__ method. kwargs (remaining dictionary of keyword arguments, optional) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). To update the encoder configuration, use the prefix encoder_ for each configuration parameter. To update the decoder configuration, use the prefix decoder_ for each configuration parameter. To update the parent model configuration, do not use a prefix for each configuration parameter. 
Behaves differently depending on whether a config is provided or automatically loaded. Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model checkpoints. Example:

>>> from transformers import TFEncoderDecoderModel

>>> # initialize a bert2gpt2 from a pretrained BERT and a pretrained GPT2 model (the cross-attention layers will be randomly initialized)
>>> model = TFEncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "gpt2")
>>> # save the model after fine-tuning
>>> model.save_pretrained("./bert2gpt2")
>>> # load the fine-tuned model
>>> model = TFEncoderDecoderModel.from_pretrained("./bert2gpt2")

FlaxEncoderDecoderModel class transformers.FlaxEncoderDecoderModel < source > ( config: EncoderDecoderConfig input_shape: typing.Optional[typing.Tuple] = None seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs ) Parameters config (EncoderDecoderConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified, all the computation will be performed with the given dtype. Note that this only specifies the dtype of the computation and does not influence the dtype of the model parameters. If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16(). This class can be used to initialize a sequence-to-sequence model with any pretrained autoencoding model as the encoder and any pretrained autoregressive model as the decoder. The encoder is loaded via the from_pretrained() function and the decoder is loaded via the from_pretrained() function. Cross-attention layers are automatically added to the decoder and should be fine-tuned on a downstream generative task, like summarization. The effectiveness of initializing sequence-to-sequence models with pretrained checkpoints for sequence generation tasks was shown in Leveraging Pre-trained Checkpoints for Sequence Generation Tasks by Sascha Rothe, Shashi Narayan, Aliaksei Severyn. After such an Encoder Decoder model has been trained/fine-tuned, it can be saved/loaded just like any other model (see the examples for more information). This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.). This model is also a Flax Linen flax.nn.Module subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior. FlaxEncoderDecoderModel is a generic model class that will be instantiated as a transformer architecture with the module (flax.nn.Module) of one of the base model classes of the library as encoder module and another one as decoder module when created with the FlaxAutoModel.from_pretrained() class method for the encoder and the FlaxAutoModelForCausalLM.from_pretrained() class method for the decoder.
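A minimal sketch of the dtype behavior described above (the checkpoint is the bert2gpt2 model used elsewhere on this page and is only an example): dtype switches the computation to bfloat16, while the parameters themselves are cast separately, here with to_bf16().

>>> import jax.numpy as jnp
>>> from transformers import FlaxEncoderDecoderModel

>>> # run the computation in bfloat16 ...
>>> model = FlaxEncoderDecoderModel.from_pretrained(
...     "patrickvonplaten/bert2gpt2-cnn_dailymail-fp16", dtype=jnp.bfloat16
... )
>>> # ... and, separately, cast the parameters themselves if desired
>>> model.params = model.to_bf16(model.params)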
__call__ < source > ( input_ids: Array attention_mask: typing.Optional[jax.Array] = None decoder_input_ids: typing.Optional[jax.Array] = None decoder_attention_mask: typing.Optional[jax.Array] = None position_ids: typing.Optional[jax.Array] = None decoder_position_ids: typing.Optional[jax.Array] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None train: bool = False params: dict = None dropout_rng: PRNGKey = None ) → transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or tuple(torch.FloatTensor) Parameters input_ids (jnp.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using PreTrainedTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (jnp.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? decoder_input_ids (jnp.ndarray of shape (batch_size, target_sequence_length), optional) — Indices of decoder input sequence tokens in the vocabulary. Indices can be obtained using PreTrainedTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are decoder input IDs? For sequence to sequence training, decoder_input_ids should be provided. decoder_input_ids should be created outside of the model by shifting the labels to the right, replacing -100 by the pad_token_id and prepending them with the decoder_start_token_id. decoder_attention_mask (jnp.ndarray of shape (batch_size, target_sequence_length), optional) — Default behavior: generate a tensor that ignores pad tokens in decoder_input_ids. Causal mask will also be used by default. position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.encoder.max_position_embeddings - 1]. decoder_position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each decoder input sequence tokens in the position embeddings. Selected in the range [0, config.decoder.max_position_embeddings - 1]. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — If set to True, the model will return a ~utils.FlaxSeq2SeqLMOutput instead of a plain tuple. A transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (EncoderDecoderConfig) and inputs. logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). 
past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaxEncoderDecoderModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import FlaxEncoderDecoderModel, BertTokenizer, GPT2Tokenizer >>> >>> model = FlaxEncoderDecoderModel.from_pretrained("patrickvonplaten/bert2gpt2-cnn_dailymail-fp16") >>> >>> tokenizer_input = BertTokenizer.from_pretrained("bert-base-cased") >>> tokenizer_output = GPT2Tokenizer.from_pretrained("gpt2") >>> article = '''Sigma Alpha Epsilon is under fire for a video showing party-bound fraternity members >>> singing a racist chant. 
SAE's national chapter suspended the students, >>> but University of Oklahoma President David Boren took it a step further, >>> saying the university's affiliation with the fraternity is permanently done.''' >>> input_ids = tokenizer_input(article, add_special_tokens=True, return_tensors="np").input_ids >>> >>> model.config.eos_token_id = model.config.decoder.eos_token_id >>> model.config.pad_token_id = model.config.eos_token_id >>> sequences = model.generate(input_ids, num_beams=4, max_length=12).sequences >>> summary = tokenizer_output.batch_decode(sequences, skip_special_tokens=True)[0] >>> assert summary == "SAS Alpha Epsilon suspended Sigma Alpha Epsilon members" from_encoder_decoder_pretrained < source > ( encoder_pretrained_model_name_or_path: typing.Union[str, os.PathLike, NoneType] = None decoder_pretrained_model_name_or_path: typing.Union[str, os.PathLike, NoneType] = None *model_args **kwargs ) Parameters encoder_pretrained_model_name_or_path (Union[str, os.PathLike], optional) — Information necessary to initiate the encoder. Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. decoder_pretrained_model_name_or_path (Union[str, os.PathLike], optional, defaults to None) — Information necessary to initiate the decoder. Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. model_args (remaining positional arguments, optional) — All remaning positional arguments will be passed to the underlying model’s __init__ method. kwargs (remaining dictionary of keyword arguments, optional) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). To update the encoder configuration, use the prefix encoder_ for each configuration parameter. To update the decoder configuration, use the prefix decoder_ for each configuration parameter. To update the parent model configuration, do not use a prefix for each configuration parameter. Behaves differently depending on whether a config is provided or automatically loaded. Instantiate an encoder and a decoder from one or two base classes of the library from pretrained model checkpoints. Example: >>> from transformers import FlaxEncoderDecoderModel >>> >>> model = FlaxEncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-cased", "gpt2") >>> >>> model.save_pretrained("./bert2gpt2") >>> >>> model = FlaxEncoderDecoderModel.from_pretrained("./bert2gpt2")
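As a rough sketch of the encoder_ / decoder_ keyword-argument prefixes described for from_encoder_decoder_pretrained() above, individual encoder and decoder configuration values can be overridden at load time. The dropout values below are arbitrary and purely illustrative, assuming hidden_dropout_prob and attn_pdrop are the relevant options of the BERT and GPT-2 configs.

>>> from transformers import FlaxEncoderDecoderModel

>>> model = FlaxEncoderDecoderModel.from_encoder_decoder_pretrained(
...     "bert-base-cased",
...     "gpt2",
...     encoder_hidden_dropout_prob=0.2,  # forwarded to the BERT encoder config
...     decoder_attn_pdrop=0.2,  # forwarded to the GPT-2 decoder config
... )
>>> assert model.config.encoder.hidden_dropout_prob == 0.2
>>> assert model.config.decoder.attn_pdrop == 0.2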
https://huggingface.co/docs/transformers/main_classes/output
Model outputs All models have outputs that are instances of subclasses of ModelOutput. Those are data structures containing all the information returned by the model, but they can also be used as tuples or dictionaries. Let's see how this looks in an example:

from transformers import BertTokenizer, BertForSequenceClassification
import torch

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
labels = torch.tensor([1]).unsqueeze(0)  # batch size 1
outputs = model(**inputs, labels=labels)

The outputs object is a SequenceClassifierOutput. As we can see in the documentation of that class below, it has an optional loss, a logits, an optional hidden_states and an optional attentions attribute. Here we have the loss since we passed along labels, but we don't have hidden_states and attentions because we didn't pass output_hidden_states=True or output_attentions=True. When passing output_hidden_states=True, you may expect outputs.hidden_states[-1] to match outputs.last_hidden_state exactly. However, this is not always the case: some models apply normalization or further processing to the last hidden state when it is returned. You can access each attribute as you would usually do, and if that attribute has not been returned by the model, you will get None. Here, for instance, outputs.loss is the loss computed by the model, and outputs.attentions is None. When considering our outputs object as a tuple, only the attributes that are not None are taken into account. Here, for instance, it has two elements, loss then logits, so indexing it like a tuple yields (outputs.loss, outputs.logits). When considering our outputs object as a dictionary, only the keys whose values are not None are taken into account; here, for instance, the two keys are loss and logits. A short usage sketch illustrating this behavior is given after the BaseModelOutputWithPast class below. We document here the generic model outputs that are used by more than one model type. Specific output types are documented on their corresponding model page. ModelOutput class transformers.utils.ModelOutput < source > ( *args **kwargs ) Base class for all model outputs as dataclass. Has a __getitem__ that allows indexing by integer or slice (like a tuple) or strings (like a dictionary) that will ignore the None attributes. Otherwise behaves like a regular Python dictionary. You can't unpack a ModelOutput directly. Use the to_tuple() method to convert it to a tuple beforehand. to_tuple() converts self to a tuple containing all the attributes/keys that are not None. BaseModelOutput class transformers.modeling_outputs.BaseModelOutput < source > ( last_hidden_state: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for model’s outputs, with potential hidden states and attentions. BaseModelOutputWithPooling class transformers.modeling_outputs.BaseModelOutputWithPooling < source > ( last_hidden_state: FloatTensor = None pooler_output: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for model’s outputs that also contains a pooling of the last hidden states. BaseModelOutputWithCrossAttentions class transformers.modeling_outputs.BaseModelOutputWithCrossAttentions < source > ( last_hidden_state: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. 
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. Base class for model’s outputs, with potential hidden states and attentions. BaseModelOutputWithPoolingAndCrossAttentions class transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions < source > ( last_hidden_state: FloatTensor = None pooler_output: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. 
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. Base class for model’s outputs that also contains a pooling of the last hidden states. BaseModelOutputWithPast class transformers.modeling_outputs.BaseModelOutputWithPast < source > ( last_hidden_state: FloatTensor = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for model’s outputs that may also contain a past key/values (to speed up sequential decoding). 
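The tuple/dictionary behavior described in the introduction of this page can be sketched as follows (a minimal example, assuming the bert-base-uncased checkpoint is available; attribute access, None-skipping indexing and to_tuple() are the documented ModelOutput mechanics):

import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=torch.tensor([1]).unsqueeze(0))

# attribute access: attributes the model did not return are simply None
assert outputs.attentions is None

# dictionary-style and tuple-style indexing skip the None attributes
assert outputs["loss"] is outputs.loss
assert outputs[0] is outputs.loss  # loss comes first because labels were passed
assert outputs[1] is outputs.logits

# to_tuple() returns only the non-None attributes, here (loss, logits)
loss, logits = outputs.to_tuple()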
BaseModelOutputWithPastAndCrossAttentions class transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions < source > ( last_hidden_state: FloatTensor = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. Base class for model’s outputs that may also contain a past key/values (to speed up sequential decoding). 
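To make the past_key_values mechanics described above concrete, here is a minimal sketch with the GPT-2 base model, whose output is a BaseModelOutputWithPastAndCrossAttentions. The shapes follow the documented (batch_size, num_heads, sequence_length, embed_size_per_head) layout; the exact numbers depend on the checkpoint.

import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")

inputs = tokenizer("Hello, my dog is", return_tensors="pt")
outputs = model(**inputs, use_cache=True)

past = outputs.past_key_values  # one (key, value) pair per layer
key, value = past[0]
print(key.shape)  # e.g. torch.Size([1, 12, 5, 64]) for gpt2

# on the next step, only the newly generated token needs to be fed together with the cache
next_token = torch.tensor([[tokenizer.encode(" cute")[0]]])
next_outputs = model(input_ids=next_token, past_key_values=past, use_cache=True)
print(next_outputs.last_hidden_state.shape)  # only the new position is returned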
Seq2SeqModelOutput class transformers.modeling_outputs.Seq2SeqModelOutput < source > ( last_hidden_state: FloatTensor = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None decoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs. decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs. 
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for model encoder’s outputs that also contains : pre-computed hidden states that can speed up sequential decoding. CausalLMOutput class transformers.modeling_outputs.CausalLMOutput < source > ( loss: typing.Optional[torch.FloatTensor] = None logits: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction). logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for causal language model (or autoregressive) outputs. CausalLMOutputWithCrossAttentions class transformers.modeling_outputs.CausalLMOutputWithCrossAttentions < source > ( loss: typing.Optional[torch.FloatTensor] = None logits: FloatTensor = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction). logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). 
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Cross attentions weights after the attention softmax, used to compute the weighted average in the cross-attention heads. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key, value states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting. Only relevant if config.is_decoder = True. Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. Base class for causal language model (or autoregressive) outputs. CausalLMOutputWithPast class transformers.modeling_outputs.CausalLMOutputWithPast < source > ( loss: typing.Optional[torch.FloatTensor] = None logits: FloatTensor = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction). logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for causal language model (or autoregressive) outputs. 
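As a minimal sketch of how the loss and logits of a causal language model output are typically used (GPT-2 is only an example; its LM head returns one of the CausalLMOutput* classes documented above):

from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("Hello, my dog is", return_tensors="pt")
# passing labels makes the model compute the (internally shifted) next-token prediction loss
outputs = model(**inputs, labels=inputs["input_ids"])

print(outputs.loss)  # scalar language modeling loss
next_token_logits = outputs.logits[:, -1, :]  # scores for the token following the prompt
next_token = next_token_logits.argmax(dim=-1)
print(tokenizer.decode(next_token))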
MaskedLMOutput class transformers.modeling_outputs.MaskedLMOutput < source > ( loss: typing.Optional[torch.FloatTensor] = None logits: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for masked language models outputs. Seq2SeqLMOutput class transformers.modeling_outputs.Seq2SeqLMOutput < source > ( loss: typing.Optional[torch.FloatTensor] = None logits: FloatTensor = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None decoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. 
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for sequence-to-sequence language models outputs. NextSentencePredictorOutput class transformers.modeling_outputs.NextSentencePredictorOutput < source > ( loss: typing.Optional[torch.FloatTensor] = None logits: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters loss (torch.FloatTensor of shape (1,), optional, returned when next_sentence_label is provided) — Next sequence prediction (classification) loss. logits (torch.FloatTensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for outputs of models predicting if two sentences are consecutive or not. 
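A minimal sketch of a next-sentence-prediction output, whose logits have the documented (batch_size, 2) shape; the sentences are arbitrary:

from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")

prompt = "The sky is blue."
next_sentence = "Bananas are rich in potassium."
inputs = tokenizer(prompt, next_sentence, return_tensors="pt")

outputs = model(**inputs)
print(outputs.logits.shape)  # torch.Size([1, 2]) -- True/False continuation scores
print(outputs.logits.softmax(dim=-1))  # probabilities over the two classes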
SequenceClassifierOutput class transformers.modeling_outputs.SequenceClassifierOutput < source > ( loss: typing.Optional[torch.FloatTensor] = None logits: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for outputs of sentence classification models. Seq2SeqSequenceClassifierOutput class transformers.modeling_outputs.Seq2SeqSequenceClassifierOutput < source > ( loss: typing.Optional[torch.FloatTensor] = None logits: FloatTensor = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None decoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters loss (torch.FloatTensor of shape (1,), optional, returned when label is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. 
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for outputs of sequence-to-sequence sentence classification models. MultipleChoiceModelOutput class transformers.modeling_outputs.MultipleChoiceModelOutput < source > ( loss: typing.Optional[torch.FloatTensor] = None logits: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above). Classification scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for outputs of multiple choice models. 
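A minimal sketch of a multiple-choice output, showing that the inputs carry a (batch_size, num_choices, sequence_length) shape while the returned logits are (batch_size, num_choices), as documented above; the prompt and choices are arbitrary:

import torch
from transformers import BertTokenizer, BertForMultipleChoice

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMultipleChoice.from_pretrained("bert-base-uncased")

prompt = "The weather was freezing, so she decided to"
choice0 = "put on a warm coat."
choice1 = "go for a swim in the sea."

# encode the prompt against each choice, then add the batch dimension
encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}  # each tensor: (1, 2, seq_len)

outputs = model(**inputs, labels=torch.tensor(0).unsqueeze(0))
print(outputs.logits.shape)  # torch.Size([1, 2])
print(outputs.loss)  # classification loss, since labels were passed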
TokenClassifierOutput class transformers.modeling_outputs.TokenClassifierOutput < source > ( loss: typing.Optional[torch.FloatTensor] = None logits: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for outputs of token classification models. QuestionAnsweringModelOutput class transformers.modeling_outputs.QuestionAnsweringModelOutput < source > ( loss: typing.Optional[torch.FloatTensor] = None start_logits: FloatTensor = None end_logits: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for outputs of question answering models. 
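The start_logits / end_logits pair above is typically turned into an answer span by taking an argmax over each and decoding the tokens in between. A minimal sketch, where the distilbert-base-cased-distilled-squad checkpoint, the question, and the context are chosen purely for illustration:

```python
import torch
from transformers import AutoModelForQuestionAnswering, AutoTokenizer

# Illustrative extractive-QA checkpoint; the model returns a QuestionAnsweringModelOutput.
checkpoint = "distilbert-base-cased-distilled-squad"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForQuestionAnswering.from_pretrained(checkpoint)

question = "Where do giant pandas live?"
context = "Giant pandas live in the mountain forests of central China."
inputs = tokenizer(question, context, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# start_logits and end_logits both have shape (batch_size, sequence_length).
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs["input_ids"][0, start : end + 1]))
```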
Seq2SeqQuestionAnsweringModelOutput class transformers.modeling_outputs.Seq2SeqQuestionAnsweringModelOutput < source > ( loss: typing.Optional[torch.FloatTensor] = None start_logits: FloatTensor = None end_logits: FloatTensor = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None decoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. 
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for outputs of sequence-to-sequence question answering models. Seq2SeqSpectrogramOutput class transformers.modeling_outputs.Seq2SeqSpectrogramOutput < source > ( loss: typing.Optional[torch.FloatTensor] = None spectrogram: FloatTensor = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None decoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Spectrogram generation loss. spectrogram (torch.FloatTensor of shape (batch_size, sequence_length, num_bins)) — The predicted spectrogram. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). 
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for sequence-to-sequence spectrogram outputs. SemanticSegmenterOutput class transformers.modeling_outputs.SemanticSegmenterOutput < source > ( loss: typing.Optional[torch.FloatTensor] = None logits: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels, logits_height, logits_width)) — Classification scores for each pixel. The logits returned do not necessarily have the same size as the pixel_values passed as inputs. This avoids doing two interpolations and losing quality when a user resizes the logits back to the original image size as post-processing. You should always check your logits shape and resize as needed. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, patch_size, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for outputs of semantic segmentation models. ImageClassifierOutput class transformers.modeling_outputs.ImageClassifierOutput < source > ( loss: typing.Optional[torch.FloatTensor] = None logits: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states (also called feature maps) of the model at the output of each stage. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for outputs of image classification models. ImageClassifierOutputWithNoAttention class transformers.modeling_outputs.ImageClassifierOutputWithNoAttention < source > ( loss: typing.Optional[torch.FloatTensor] = None logits: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the model at the output of each stage. Base class for outputs of image classification models. DepthEstimatorOutput class transformers.modeling_outputs.DepthEstimatorOutput < source > ( loss: typing.Optional[torch.FloatTensor] = None predicted_depth: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. predicted_depth (torch.FloatTensor of shape (batch_size, height, width)) — Predicted depth for each pixel. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, num_channels, height, width). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for outputs of depth estimation models. 
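DepthEstimatorOutput.predicted_depth is likewise usually smaller than the input image and is typically upsampled back to the input resolution, just as the SemanticSegmenterOutput docstring above recommends for its logits. A minimal sketch, where the Intel/dpt-large checkpoint and the local scene.jpg file are only illustrative stand-ins:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForDepthEstimation

# Illustrative checkpoint and image; the model returns a DepthEstimatorOutput.
checkpoint = "Intel/dpt-large"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModelForDepthEstimation.from_pretrained(checkpoint)

image = Image.open("scene.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# predicted_depth: (batch_size, height, width); add a channel dim so it can be resized
# back to the original image resolution, as with SemanticSegmenterOutput.logits above.
depth = torch.nn.functional.interpolate(
    outputs.predicted_depth.unsqueeze(1),
    size=image.size[::-1],  # PIL gives (width, height); interpolate expects (height, width)
    mode="bicubic",
    align_corners=False,
)[0, 0]
print(depth.shape)  # (original_height, original_width)
```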
Wav2Vec2BaseModelOutput class transformers.modeling_outputs.Wav2Vec2BaseModelOutput < source > ( last_hidden_state: FloatTensor = None extract_features: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. extract_features (torch.FloatTensor of shape (batch_size, sequence_length, conv_dim[-1])) — Sequence of extracted feature vectors of the last convolutional layer of the model. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for models that have been trained with the Wav2Vec2 loss objective. XVectorOutput class transformers.modeling_outputs.XVectorOutput < source > ( loss: typing.Optional[torch.FloatTensor] = None logits: FloatTensor = None embeddings: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, config.xvector_output_dim)) — Classification hidden states before AMSoftmax. embeddings (torch.FloatTensor of shape (batch_size, config.xvector_output_dim)) — Utterance embeddings used for vector similarity-based retrieval. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Output type of Wav2Vec2ForXVector. 
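XVectorOutput.embeddings are utterance-level speaker embeddings that are usually compared with a cosine similarity for speaker verification. A minimal sketch, where the anton-l/wav2vec2-base-superb-sv checkpoint and the random 16 kHz waveforms are only placeholders for a real model and real speech:

```python
import numpy as np
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2ForXVector

checkpoint = "anton-l/wav2vec2-base-superb-sv"  # illustrative speaker-verification checkpoint
feature_extractor = AutoFeatureExtractor.from_pretrained(checkpoint)
model = Wav2Vec2ForXVector.from_pretrained(checkpoint)

# Two fake 1-second utterances at 16 kHz; replace with real speech arrays in practice.
waveforms = [np.random.randn(16000).astype(np.float32) for _ in range(2)]
inputs = feature_extractor(waveforms, sampling_rate=16000, padding=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)  # XVectorOutput

print(outputs.logits.shape)      # (batch_size, config.xvector_output_dim), pre-AMSoftmax scores
print(outputs.embeddings.shape)  # (batch_size, config.xvector_output_dim), speaker embeddings
similarity = torch.nn.functional.cosine_similarity(outputs.embeddings[0], outputs.embeddings[1], dim=-1)
print(float(similarity))         # close to 1.0 for utterances from the same speaker
```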
Seq2SeqTSModelOutput class transformers.modeling_outputs.Seq2SeqTSModelOutput < source > ( last_hidden_state: FloatTensor = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None decoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None loc: typing.Optional[torch.FloatTensor] = None scale: typing.Optional[torch.FloatTensor] = None static_features: typing.Optional[torch.FloatTensor] = None ) Parameters last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the optional initial embedding outputs. decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). 
Hidden-states of the encoder at the output of each layer plus the optional initial embedding outputs. encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. loc (torch.FloatTensor of shape (batch_size,) or (batch_size, input_size), optional) — Shift values of each time series’ context window which is used to give the model inputs of the same magnitude and then used to shift back to the original magnitude. scale (torch.FloatTensor of shape (batch_size,) or (batch_size, input_size), optional) — Scaling values of each time series’ context window which is used to give the model inputs of the same magnitude and then used to rescale back to the original magnitude. static_features (torch.FloatTensor of shape (batch_size, feature size), optional) — Static features of each time series’ in a batch which are copied to the covariates at inference time. Base class for time series model’s encoder outputs that also contains pre-computed hidden states that can speed up sequential decoding. Seq2SeqTSPredictionOutput class transformers.modeling_outputs.Seq2SeqTSPredictionOutput < source > ( loss: typing.Optional[torch.FloatTensor] = None params: typing.Optional[typing.Tuple[torch.FloatTensor]] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None decoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None loc: typing.Optional[torch.FloatTensor] = None scale: typing.Optional[torch.FloatTensor] = None static_features: typing.Optional[torch.FloatTensor] = None ) Parameters loss (torch.FloatTensor of shape (1,), optional, returned when a future_values is provided) — Distributional loss. params (torch.FloatTensor of shape (batch_size, num_samples, num_params)) — Parameters of the chosen distribution. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. 
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. loc (torch.FloatTensor of shape (batch_size,) or (batch_size, input_size), optional) — Shift values of each time series’ context window which is used to give the model inputs of the same magnitude and then used to shift back to the original magnitude. scale (torch.FloatTensor of shape (batch_size,) or (batch_size, input_size), optional) — Scaling values of each time series’ context window which is used to give the model inputs of the same magnitude and then used to rescale back to the original magnitude. static_features (torch.FloatTensor of shape (batch_size, feature size), optional) — Static features of each time series’ in a batch which are copied to the covariates at inference time. Base class for time series model’s decoder outputs that also contain the loss as well as the parameters of the chosen distribution. SampleTSPredictionOutput class transformers.modeling_outputs.SampleTSPredictionOutput < source > ( sequences: FloatTensor = None ) Parameters sequences (torch.FloatTensor of shape (batch_size, num_samples, prediction_length) or (batch_size, num_samples, prediction_length, input_size)) — Sampled values from the chosen distribution. Base class for time series model’s predictions outputs that contains the sampled values from the chosen distribution. TFBaseModelOutput class transformers.modeling_tf_outputs.TFBaseModelOutput < source > ( last_hidden_state: tf.Tensor = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None ) Parameters last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. 
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for model’s outputs, with potential hidden states and attentions. TFBaseModelOutputWithPooling class transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling < source > ( last_hidden_state: tf.Tensor = None pooler_output: tf.Tensor = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None ) Parameters last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. This output is usually not a good summary of the semantic content of the input; you’re often better off averaging or pooling the sequence of hidden-states for the whole input sequence. hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for model’s outputs that also contains a pooling of the last hidden states. TFBaseModelOutputWithPoolingAndCrossAttentions class transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions < source > ( last_hidden_state: tf.Tensor = None pooler_output: tf.Tensor = None past_key_values: List[tf.Tensor] | None = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None cross_attentions: Tuple[tf.Tensor] | None = None ) Parameters last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining.
This output is usually not a good summary of the semantic content of the input; you’re often better off averaging or pooling the sequence of hidden-states for the whole input sequence. past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)). Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. Base class for model’s outputs that also contains a pooling of the last hidden states. TFBaseModelOutputWithPast class transformers.modeling_tf_outputs.TFBaseModelOutputWithPast < source > ( last_hidden_state: tf.Tensor = None past_key_values: List[tf.Tensor] | None = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None ) Parameters last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output. past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)). Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Base class for model’s outputs that may also contain a past key/values (to speed up sequential decoding). TFBaseModelOutputWithPastAndCrossAttentions class transformers.modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions < source > ( last_hidden_state: tf.Tensor = None past_key_values: List[tf.Tensor] | None = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None cross_attentions: Tuple[tf.Tensor] | None = None ) Parameters last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output. past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)). Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(tf.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. Base class for model’s outputs that may also contain a past key/values (to speed up sequential decoding). TFSeq2SeqModelOutput class transformers.modeling_tf_outputs.TFSeq2SeqModelOutput < source > ( last_hidden_state: tf.Tensor = None past_key_values: List[tf.Tensor] | None = None decoder_hidden_states: Tuple[tf.Tensor] | None = None decoder_attentions: Tuple[tf.Tensor] | None = None cross_attentions: Tuple[tf.Tensor] | None = None encoder_last_hidden_state: tf.Tensor | None = None encoder_hidden_states: Tuple[tf.Tensor] | None = None encoder_attentions: Tuple[tf.Tensor] | None = None ) Parameters last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output. past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)). 
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be used (see past_key_values input) to speed up sequential decoding. decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for model encoder’s outputs that also contains pre-computed hidden states that can speed up sequential decoding. TFCausalLMOutput class transformers.modeling_tf_outputs.TFCausalLMOutput < source > ( loss: tf.Tensor | None = None logits: tf.Tensor = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None ) Parameters loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss (for next-token prediction). logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for causal language model (or autoregressive) outputs. TFCausalLMOutputWithCrossAttentions class transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions < source > ( loss: tf.Tensor | None = None logits: tf.Tensor = None past_key_values: List[tf.Tensor] | None = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None cross_attentions: Tuple[tf.Tensor] | None = None ) Parameters loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss (for next-token prediction). logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)). Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. Base class for causal language model (or autoregressive) outputs. TFCausalLMOutputWithPast class transformers.modeling_tf_outputs.TFCausalLMOutputWithPast < source > ( loss: tf.Tensor | None = None logits: tf.Tensor = None past_key_values: List[tf.Tensor] | None = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None ) Parameters loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss (for next-token prediction). logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). 
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)). Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for causal language model (or autoregressive) outputs. TFMaskedLMOutput class transformers.modeling_tf_outputs.TFMaskedLMOutput < source > ( loss: tf.Tensor | None = None logits: tf.Tensor = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None ) Parameters loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Masked language modeling (MLM) loss. logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for masked language models outputs. TFSeq2SeqLMOutput class transformers.modeling_tf_outputs.TFSeq2SeqLMOutput < source > ( loss: tf.Tensor | None = None logits: tf.Tensor = None past_key_values: List[tf.Tensor] | None = None decoder_hidden_states: Tuple[tf.Tensor] | None = None decoder_attentions: Tuple[tf.Tensor] | None = None cross_attentions: Tuple[tf.Tensor] | None = None encoder_last_hidden_state: tf.Tensor | None = None encoder_hidden_states: Tuple[tf.Tensor] | None = None encoder_attentions: Tuple[tf.Tensor] | None = None ) Parameters loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss. logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). 
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)). Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be used (see past_key_values input) to speed up sequential decoding. decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for sequence-to-sequence language models outputs. TFNextSentencePredictorOutput class transformers.modeling_tf_outputs.TFNextSentencePredictorOutput < source > ( loss: tf.Tensor | None = None logits: tf.Tensor = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None ) Parameters loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when next_sentence_label is provided) — Next sentence prediction loss. logits (tf.Tensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). 
Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for outputs of models predicting if two sentences are consecutive or not. TFSequenceClassifierOutput class transformers.modeling_tf_outputs.TFSequenceClassifierOutput < source > ( loss: tf.Tensor | None = None logits: tf.Tensor = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None ) Parameters loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for outputs of sentence classification models. TFSeq2SeqSequenceClassifierOutput class transformers.modeling_tf_outputs.TFSeq2SeqSequenceClassifierOutput < source > ( loss: tf.Tensor | None = None logits: tf.Tensor = None past_key_values: List[tf.Tensor] | None = None decoder_hidden_states: Tuple[tf.Tensor] | None = None decoder_attentions: Tuple[tf.Tensor] | None = None cross_attentions: Tuple[tf.Tensor] | None = None encoder_last_hidden_state: tf.Tensor | None = None encoder_hidden_states: Tuple[tf.Tensor] | None = None encoder_attentions: Tuple[tf.Tensor] | None = None ) Parameters loss (tf.Tensor of shape (1,), optional, returned when label is provided) — Classification (or regression if config.num_labels==1) loss. logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)). Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be used (see past_key_values input) to speed up sequential decoding. decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. 
decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for outputs of sequence-to-sequence sentence classification models. TFMultipleChoiceModelOutput class transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput < source > ( loss: tf.Tensor | None = None logits: tf.Tensor = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None ) Parameters loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification loss. logits (tf.Tensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above). Classification scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for outputs of multiple choice models. TFTokenClassifierOutput class transformers.modeling_tf_outputs.TFTokenClassifierOutput < source > ( loss: tf.Tensor | None = None logits: tf.Tensor = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None ) Parameters loss (tf.Tensor of shape (n,), optional, where n is the number of unmasked labels, returned when labels is provided) — Classification loss. logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for outputs of token classification models. TFQuestionAnsweringModelOutput class transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput < source > ( loss: tf.Tensor | None = None start_logits: tf.Tensor = None end_logits: tf.Tensor = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None ) Parameters loss (tf.Tensor of shape (batch_size, ), optional, returned when start_positions and end_positions are provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. start_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for outputs of question answering models. TFSeq2SeqQuestionAnsweringModelOutput class transformers.modeling_tf_outputs.TFSeq2SeqQuestionAnsweringModelOutput < source > ( loss: tf.Tensor | None = None start_logits: tf.Tensor = None end_logits: tf.Tensor = None past_key_values: List[tf.Tensor] | None = None decoder_hidden_states: Tuple[tf.Tensor] | None = None decoder_attentions: Tuple[tf.Tensor] | None = None encoder_last_hidden_state: tf.Tensor | None = None encoder_hidden_states: Tuple[tf.Tensor] | None = None encoder_attentions: Tuple[tf.Tensor] | None = None ) Parameters loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. start_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)). 
Contains pre-computed hidden-states (key and values in the attention blocks) of the decoder that can be used (see past_key_values input) to speed up sequential decoding. decoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. encoder_last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for outputs of sequence-to-sequence question answering models. FlaxBaseModelOutput class transformers.modeling_flax_outputs.FlaxBaseModelOutput < source > ( last_hidden_state: Array = None hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None attentions: typing.Optional[typing.Tuple[jax.Array]] = None ) Parameters last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for model’s outputs, with potential hidden states and attentions. “Returns a new object replacing the specified fields with new values. 
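The Flax output classes listed here are flax.struct dataclasses, which is where the recurring note about returning a new object with replaced fields comes from: each instance exposes a replace() method that copies the output with some fields swapped out. Below is a minimal sketch (requires jax and flax to be installed) using dummy tensors whose shapes are illustrative only and not tied to any checkpoint, showing attribute access and replace():

>>> import jax.numpy as jnp
>>> from transformers.modeling_flax_outputs import FlaxBaseModelOutput

>>> # A dummy output standing in for real model activations
>>> output = FlaxBaseModelOutput(last_hidden_state=jnp.zeros((1, 4, 8)))
>>> output.last_hidden_state.shape
(1, 4, 8)
>>> # replace() returns a copy with the given fields updated; the original object is left untouched
>>> with_states = output.replace(hidden_states=(jnp.zeros((1, 4, 8)),))
>>> with_states.hidden_states[0].shape
(1, 4, 8)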
FlaxBaseModelOutputWithPast class transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPast < source > ( last_hidden_state: Array = None past_key_values: typing.Union[typing.Dict[str, jax.Array], NoneType] = None hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None attentions: typing.Optional[typing.Tuple[jax.Array]] = None ) Parameters last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. past_key_values (Dict[str, jnp.ndarray]) — Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length]. hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for model’s outputs, with potential hidden states and attentions. “Returns a new object replacing the specified fields with new values. FlaxBaseModelOutputWithPooling class transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPooling < source > ( last_hidden_state: Array = None pooler_output: Array = None hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None attentions: typing.Optional[typing.Tuple[jax.Array]] = None ) Parameters last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (jnp.ndarray of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for model’s outputs that also contains a pooling of the last hidden states. “Returns a new object replacing the specified fields with new values. 
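As a concrete example of the pooled variant, the Flax BERT encoder returns this class. A minimal sketch, assuming the bert-base-uncased checkpoint (which ships Flax weights) can be downloaded:

>>> from transformers import AutoTokenizer, FlaxBertModel

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = FlaxBertModel.from_pretrained("bert-base-uncased")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
>>> outputs = model(**inputs)

>>> # last_hidden_state: (batch_size, sequence_length, hidden_size)
>>> # pooler_output: (batch_size, hidden_size)
>>> outputs.last_hidden_state.shape, outputs.pooler_output.shape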
FlaxBaseModelOutputWithPastAndCrossAttentions class transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions < source > ( last_hidden_state: Array = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[jax.Array]]] = None hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None attentions: typing.Optional[typing.Tuple[jax.Array]] = None cross_attentions: typing.Optional[typing.Tuple[jax.Array]] = None ) Parameters last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output. past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. Base class for model’s outputs that may also contain a past key/values (to speed up sequential decoding). “Returns a new object replacing the specified fields with new values. 
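Decoder-style Flax models return this class; FlaxGPT2Model is one example. A minimal sketch, assuming the gpt2 checkpoint with Flax weights, showing how the optional hidden_states and attentions tuples are requested at call time:

>>> from transformers import AutoTokenizer, FlaxGPT2Model

>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = FlaxGPT2Model.from_pretrained("gpt2")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
>>> outputs = model(**inputs, output_hidden_states=True, output_attentions=True)

>>> # one hidden-state tensor per layer plus the embedding output, one attention tensor per layer
>>> len(outputs.hidden_states), len(outputs.attentions)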
FlaxSeq2SeqModelOutput class transformers.modeling_flax_outputs.FlaxSeq2SeqModelOutput < source > ( last_hidden_state: Array = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[jax.Array]]] = None decoder_hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None decoder_attentions: typing.Optional[typing.Tuple[jax.Array]] = None cross_attentions: typing.Optional[typing.Tuple[jax.Array]] = None encoder_last_hidden_state: typing.Optional[jax.Array] = None encoder_hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None encoder_attentions: typing.Optional[typing.Tuple[jax.Array]] = None ) Parameters last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output. past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). 
Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for model encoder’s outputs that also contains : pre-computed hidden states that can speed up sequential decoding. “Returns a new object replacing the specified fields with new values. FlaxCausalLMOutputWithCrossAttentions class transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions < source > ( logits: Array = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[jax.Array]]] = None hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None attentions: typing.Optional[typing.Tuple[jax.Array]] = None cross_attentions: typing.Optional[typing.Tuple[jax.Array]] = None ) Parameters logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Cross attentions weights after the attention softmax, used to compute the weighted average in the cross-attention heads. past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of jnp.ndarray tuples of length config.n_layers, with each tuple containing the cached key, value states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting. Only relevant if config.is_decoder = True. Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. Base class for causal language model (or autoregressive) outputs. “Returns a new object replacing the specified fields with new values. FlaxMaskedLMOutput class transformers.modeling_flax_outputs.FlaxMaskedLMOutput < source > ( logits: Array = None hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None attentions: typing.Optional[typing.Tuple[jax.Array]] = None ) Parameters logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. 
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for masked language models outputs. “Returns a new object replacing the specified fields with new values. FlaxSeq2SeqLMOutput class transformers.modeling_flax_outputs.FlaxSeq2SeqLMOutput < source > ( logits: Array = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[jax.Array]]] = None decoder_hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None decoder_attentions: typing.Optional[typing.Tuple[jax.Array]] = None cross_attentions: typing.Optional[typing.Tuple[jax.Array]] = None encoder_last_hidden_state: typing.Optional[jax.Array] = None encoder_hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None encoder_attentions: typing.Optional[typing.Tuple[jax.Array]] = None ) Parameters logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). 
Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for sequence-to-sequence language models outputs. “Returns a new object replacing the specified fields with new values. FlaxNextSentencePredictorOutput class transformers.modeling_flax_outputs.FlaxNextSentencePredictorOutput < source > ( logits: Array = None hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None attentions: typing.Optional[typing.Tuple[jax.Array]] = None ) Parameters logits (jnp.ndarray of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation before SoftMax). hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for outputs of models predicting if two sentences are consecutive or not. “Returns a new object replacing the specified fields with new values. FlaxSequenceClassifierOutput class transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput < source > ( logits: Array = None hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None attentions: typing.Optional[typing.Tuple[jax.Array]] = None ) Parameters logits (jnp.ndarray of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for outputs of sentence classification models. “Returns a new object replacing the specified fields with new values. 
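A minimal sketch of how this output is produced by a Flax sequence classification head, assuming a BERT checkpoint; the classification head below is freshly initialized on top of the pretrained encoder, so the logits are only meaningful after fine-tuning:

>>> from transformers import AutoTokenizer, FlaxBertForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = FlaxBertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

>>> inputs = tokenizer("This movie was great!", return_tensors="np")
>>> outputs = model(**inputs)

>>> # logits: (batch_size, config.num_labels)
>>> outputs.logits.shape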
FlaxSeq2SeqSequenceClassifierOutput class transformers.modeling_flax_outputs.FlaxSeq2SeqSequenceClassifierOutput < source > ( logits: Array = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[jax.Array]]] = None decoder_hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None decoder_attentions: typing.Optional[typing.Tuple[jax.Array]] = None cross_attentions: typing.Optional[typing.Tuple[jax.Array]] = None encoder_last_hidden_state: typing.Optional[jax.Array] = None encoder_hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None encoder_attentions: typing.Optional[typing.Tuple[jax.Array]] = None ) Parameters logits (jnp.ndarray of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. 
Base class for outputs of sequence-to-sequence sentence classification models. “Returns a new object replacing the specified fields with new values. FlaxMultipleChoiceModelOutput class transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput < source > ( logits: Array = None hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None attentions: typing.Optional[typing.Tuple[jax.Array]] = None ) Parameters logits (jnp.ndarray of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above). Classification scores (before SoftMax). hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for outputs of multiple choice models. “Returns a new object replacing the specified fields with new values. FlaxTokenClassifierOutput class transformers.modeling_flax_outputs.FlaxTokenClassifierOutput < source > ( logits: Array = None hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None attentions: typing.Optional[typing.Tuple[jax.Array]] = None ) Parameters logits (jnp.ndarray of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for outputs of token classification models. “Returns a new object replacing the specified fields with new values. FlaxQuestionAnsweringModelOutput class transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput < source > ( start_logits: Array = None end_logits: Array = None hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None attentions: typing.Optional[typing.Tuple[jax.Array]] = None ) Parameters start_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). 
Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for outputs of question answering models. “Returns a new object replacing the specified fields with new values. FlaxSeq2SeqQuestionAnsweringModelOutput class transformers.modeling_flax_outputs.FlaxSeq2SeqQuestionAnsweringModelOutput < source > ( start_logits: Array = None end_logits: Array = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[jax.Array]]] = None decoder_hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None decoder_attentions: typing.Optional[typing.Tuple[jax.Array]] = None cross_attentions: typing.Optional[typing.Tuple[jax.Array]] = None encoder_last_hidden_state: typing.Optional[jax.Array] = None encoder_hidden_states: typing.Optional[typing.Tuple[jax.Array]] = None encoder_attentions: typing.Optional[typing.Tuple[jax.Array]] = None ) Parameters start_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. decoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. 
encoder_hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for outputs of sequence-to-sequence question answering models. “Returns a new object replacing the specified fields with new values.
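A minimal sketch of a sequence-to-sequence question answering output, assuming FlaxBartForQuestionAnswering and the facebook/bart-base checkpoint are available in your installation; the span-extraction head is randomly initialized unless a fine-tuned checkpoint is used, so the example only illustrates the output structure:

>>> from transformers import AutoTokenizer, FlaxBartForQuestionAnswering

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
>>> model = FlaxBartForQuestionAnswering.from_pretrained("facebook/bart-base")

>>> question, context = "Who was Jim Henson?", "Jim Henson was a nice puppet"
>>> inputs = tokenizer(question, context, return_tensors="np")
>>> outputs = model(**inputs)

>>> # start_logits / end_logits: (batch_size, sequence_length); encoder states are returned alongside
>>> outputs.start_logits.shape, outputs.end_logits.shape, outputs.encoder_last_hidden_state.shape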
https://huggingface.co/docs/transformers/model_doc/electra
ELECTRA Overview The ELECTRA model was proposed in the paper ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ELECTRA is a new pretraining approach which trains two transformer models: the generator and the discriminator. The generator’s role is to replace tokens in a sequence, and is therefore trained as a masked language model. The discriminator, which is the model we’re interested in, tries to identify which tokens were replaced by the generator in the sequence. The abstract from the paper is the following: Masked language modeling (MLM) pretraining methods such as BERT corrupt the input by replacing some tokens with [MASK] and then train a model to reconstruct the original tokens. While they produce good results when transferred to downstream NLP tasks, they generally require large amounts of compute to be effective. As an alternative, we propose a more sample-efficient pretraining task called replaced token detection. Instead of masking the input, our approach corrupts it by replacing some tokens with plausible alternatives sampled from a small generator network. Then, instead of training a model that predicts the original identities of the corrupted tokens, we train a discriminative model that predicts whether each token in the corrupted input was replaced by a generator sample or not. Thorough experiments demonstrate this new pretraining task is more efficient than MLM because the task is defined over all input tokens rather than just the small subset that was masked out. As a result, the contextual representations learned by our approach substantially outperform the ones learned by BERT given the same model size, data, and compute. The gains are particularly strong for small models; for example, we train a model on one GPU for 4 days that outperforms GPT (trained using 30x more compute) on the GLUE natural language understanding benchmark. Our approach also works well at scale, where it performs comparably to RoBERTa and XLNet while using less than 1/4 of their compute and outperforms them when using the same amount of compute. Tips: ELECTRA is the pretraining approach, therefore there is nearly no changes done to the underlying model: BERT. The only change is the separation of the embedding size and the hidden size: the embedding size is generally smaller, while the hidden size is larger. An additional projection layer (linear) is used to project the embeddings from their embedding size to the hidden size. In the case where the embedding size is the same as the hidden size, no projection layer is used. ELECTRA is a transformer model pretrained with the use of another (small) masked language model. The inputs are corrupted by that language model, which takes an input text that is randomly masked and outputs a text in which ELECTRA has to predict which token is an original and which one has been replaced. Like for GAN training, the small language model is trained for a few steps (but with the original texts as objective, not to fool the ELECTRA model like in a traditional GAN setting) then the ELECTRA model is trained for a few steps. The ELECTRA checkpoints saved using Google Research’s implementation contain both the generator and discriminator. The conversion script requires the user to name which model to export into the correct architecture. Once converted to the HuggingFace format, these checkpoints may be loaded into all available ELECTRA models, however. 
This means that the discriminator may be loaded in the ElectraForMaskedLM model, and the generator may be loaded in the ElectraForPreTraining model (the classification head will be randomly initialized as it doesn’t exist in the generator). This model was contributed by lysandre. The original code can be found here. Documentation resources Text classification task guide Token classification task guide Question answering task guide Causal language modeling task guide Masked language modeling task guide Multiple choice task guide ElectraConfig class transformers.ElectraConfig < source > ( vocab_size = 30522 embedding_size = 128 hidden_size = 256 num_hidden_layers = 12 num_attention_heads = 4 intermediate_size = 1024 hidden_act = 'gelu' hidden_dropout_prob = 0.1 attention_probs_dropout_prob = 0.1 max_position_embeddings = 512 type_vocab_size = 2 initializer_range = 0.02 layer_norm_eps = 1e-12 summary_type = 'first' summary_use_proj = True summary_activation = 'gelu' summary_last_dropout = 0.1 pad_token_id = 0 position_embedding_type = 'absolute' use_cache = True classifier_dropout = None **kwargs ) Parameters vocab_size (int, optional, defaults to 30522) — Vocabulary size of the ELECTRA model. Defines the number of different tokens that can be represented by the input_ids passed when calling ElectraModel or TFElectraModel. embedding_size (int, optional, defaults to 128) — Dimensionality of the token embeddings. When it differs from hidden_size, a projection layer maps the embeddings to the hidden size. hidden_size (int, optional, defaults to 256) — Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 4) — Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (int, optional, defaults to 1024) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder. hidden_act (str or Callable, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported. hidden_dropout_prob (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (float, optional, defaults to 0.1) — The dropout ratio for the attention probabilities. max_position_embeddings (int, optional, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). type_vocab_size (int, optional, defaults to 2) — The vocabulary size of the token_type_ids passed when calling ElectraModel or TFElectraModel. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers. summary_type (str, optional, defaults to "first") — Argument used when doing sequence summary. Used in the sequence classification and multiple choice models. Has to be one of the following options: "last": Take the last token hidden state (like XLNet). "first": Take the first token hidden state (like BERT). "mean": Take the mean of all tokens hidden states. "cls_index": Supply a Tensor of classification token position (like GPT/GPT-2). "attn": Not implemented now, use multi-head attention.
summary_use_proj (bool, optional, defaults to True) — Argument used when doing sequence summary. Used in the sequence classification and multiple choice models. Whether or not to add a projection after the vector extraction. summary_activation (str, optional) — Argument used when doing sequence summary. Used in the sequence classification and multiple choice models. Pass "gelu" for a gelu activation to the output, any other value will result in no activation. summary_last_dropout (float, optional, defaults to 0.0) — Argument used when doing sequence summary. Used in the sequence classification and multiple choice models. The dropout ratio to be used after the projection and activation. position_embedding_type (str, optional, defaults to "absolute") — Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For positional embeddings use "absolute". For more information on "relative_key", please refer to Self-Attention with Relative Position Representations (Shaw et al.). For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models with Better Relative Position Embeddings (Huang et al.). use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if config.is_decoder=True. classifier_dropout (float, optional) — The dropout ratio for the classification head. This is the configuration class to store the configuration of a ElectraModel or a TFElectraModel. It is used to instantiate a ELECTRA model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the ELECTRA google/electra-small-discriminator architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Examples: >>> from transformers import ElectraConfig, ElectraModel >>> >>> configuration = ElectraConfig() >>> >>> model = ElectraModel(configuration) >>> >>> configuration = model.config ElectraTokenizer class transformers.ElectraTokenizer < source > ( vocab_file do_lower_case = True do_basic_tokenize = True never_split = None unk_token = '[UNK]' sep_token = '[SEP]' pad_token = '[PAD]' cls_token = '[CLS]' mask_token = '[MASK]' tokenize_chinese_chars = True strip_accents = None **kwargs ) Parameters vocab_file (str) — File containing the vocabulary. do_lower_case (bool, optional, defaults to True) — Whether or not to lowercase the input when tokenizing. do_basic_tokenize (bool, optional, defaults to True) — Whether or not to do basic tokenization before WordPiece. never_split (Iterable, optional) — Collection of tokens which will never be split during tokenization. Only has an effect when do_basic_tokenize=True unk_token (str, optional, defaults to "[UNK]") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. sep_token (str, optional, defaults to "[SEP]") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. pad_token (str, optional, defaults to "[PAD]") — The token used for padding, for example when batching sequences of different lengths. 
cls_token (str, optional, defaults to "[CLS]") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. mask_token (str, optional, defaults to "[MASK]") — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. tokenize_chinese_chars (bool, optional, defaults to True) — Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this issue). strip_accents (bool, optional) — Whether or not to strip all accents. If this option is not specified, then it will be determined by the value for lowercase (as in the original Electra). Construct a Electra tokenizer. Based on WordPiece. This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. build_inputs_with_special_tokens < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs to which the special tokens will be added. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of input IDs with the appropriate special tokens. Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A Electra sequence has the following format: single sequence: [CLS] X [SEP] pair of sequences: [CLS] A [SEP] B [SEP] Converts a sequence of tokens (string) in a single string. create_token_type_ids_from_sequences < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of token type IDs according to the given sequence(s). Create a mask from the two sequences passed to be used in a sequence-pair classification task. A Electra sequence pair mask has the following format: 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 | first sequence | second sequence | If token_ids_1 is None, this method only returns the first portion of the mask (0s). get_special_tokens_mask < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None already_has_special_tokens: bool = False ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. already_has_special_tokens (bool, optional, defaults to False) — Whether or not the token list is already formatted with special tokens for the model. A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model method. ElectraTokenizerFast class transformers.ElectraTokenizerFast < source > ( vocab_file = None tokenizer_file = None do_lower_case = True unk_token = '[UNK]' sep_token = '[SEP]' pad_token = '[PAD]' cls_token = '[CLS]' mask_token = '[MASK]' tokenize_chinese_chars = True strip_accents = None **kwargs ) Parameters vocab_file (str) — File containing the vocabulary. 
do_lower_case (bool, optional, defaults to True) — Whether or not to lowercase the input when tokenizing. unk_token (str, optional, defaults to "[UNK]") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. sep_token (str, optional, defaults to "[SEP]") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. pad_token (str, optional, defaults to "[PAD]") — The token used for padding, for example when batching sequences of different lengths. cls_token (str, optional, defaults to "[CLS]") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. mask_token (str, optional, defaults to "[MASK]") — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. clean_text (bool, optional, defaults to True) — Whether or not to clean the text before tokenization by removing any control characters and replacing all whitespaces by the classic one. tokenize_chinese_chars (bool, optional, defaults to True) — Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this issue). strip_accents (bool, optional) — Whether or not to strip all accents. If this option is not specified, then it will be determined by the value for lowercase (as in the original ELECTRA). wordpieces_prefix (str, optional, defaults to "##") — The prefix for subwords. Construct a “fast” ELECTRA tokenizer (backed by HuggingFace’s tokenizers library). Based on WordPiece. This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. build_inputs_with_special_tokens < source > ( token_ids_0 token_ids_1 = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs to which the special tokens will be added. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of input IDs with the appropriate special tokens. Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A ELECTRA sequence has the following format: single sequence: [CLS] X [SEP] pair of sequences: [CLS] A [SEP] B [SEP] create_token_type_ids_from_sequences < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of token type IDs according to the given sequence(s). Create a mask from the two sequences passed to be used in a sequence-pair classification task. A ELECTRA sequence pair mask has the following format: 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 | first sequence | second sequence | If token_ids_1 is None, this method only returns the first portion of the mask (0s). 
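A short sketch of the special-token layout and token type mask described above, assuming the google/electra-small-discriminator checkpoint (the exact WordPiece tokens depend on that checkpoint's vocabulary):

>>> from transformers import ElectraTokenizerFast

>>> tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-small-discriminator")
>>> encoded = tokenizer("How are you?", "I am fine.")

>>> # [CLS] A [SEP] B [SEP], with token_type_ids 0 for the first segment and 1 for the second
>>> tokenizer.convert_ids_to_tokens(encoded["input_ids"])
['[CLS]', 'how', 'are', 'you', '?', '[SEP]', 'i', 'am', 'fine', '.', '[SEP]']
>>> encoded["token_type_ids"]
[0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1]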
Electra specific outputs class transformers.models.electra.modeling_electra.ElectraForPreTrainingOutput < source > ( loss: typing.Optional[torch.FloatTensor] = None logits: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters loss (optional, returned when labels is provided, torch.FloatTensor of shape (1,)) — Total loss of the ELECTRA objective. logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Prediction scores of the head (scores for each token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Output type of ElectraForPreTraining. class transformers.models.electra.modeling_tf_electra.TFElectraForPreTrainingOutput < source > ( logits: tf.Tensor = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None ) Parameters loss (optional, returned when labels is provided, tf.Tensor of shape (1,)) — Total loss of the ELECTRA objective. logits (tf.Tensor of shape (batch_size, sequence_length)) — Prediction scores of the head (scores for each token before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Output type of TFElectraForPreTraining. ElectraModel class transformers.ElectraModel < source > ( config ) Parameters config (ElectraConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare Electra Model transformer outputting raw hidden-states without any specific head on top. Identical to the BERT model except that it uses an additional linear layer between the embedding layer and the encoder if the hidden size and embedding size are different. Both the generator and discriminator checkpoints may be loaded into this model. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) 
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None encoder_hidden_states: typing.Optional[torch.Tensor] = None encoder_attention_mask: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithCrossAttentions or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. 
See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutputWithCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ElectraConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. The ElectraModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, ElectraModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator") >>> model = ElectraModel.from_pretrained("google/electra-small-discriminator") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state ElectraForPreTraining class transformers.ElectraForPreTraining < source > ( config ) Parameters config (ElectraConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Electra model with a binary classification head on top as used during pretraining for identifying generated tokens. It is recommended to load the discriminator checkpoint into that model. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.electra.modeling_electra.ElectraForPreTrainingOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the ELECTRA loss. 
Input should be a sequence of tokens (see input_ids docstring) Indices should be in [0, 1]: 0 indicates the token is an original token, 1 indicates the token was replaced. A transformers.models.electra.modeling_electra.ElectraForPreTrainingOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ElectraConfig) and inputs. loss (optional, returned when labels is provided, torch.FloatTensor of shape (1,)) — Total loss of the ELECTRA objective. logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Prediction scores of the head (scores for each token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The ElectraForPreTraining forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import ElectraForPreTraining, AutoTokenizer >>> import torch >>> discriminator = ElectraForPreTraining.from_pretrained("google/electra-base-discriminator") >>> tokenizer = AutoTokenizer.from_pretrained("google/electra-base-discriminator") >>> sentence = "The quick brown fox jumps over the lazy dog" >>> fake_sentence = "The quick brown fox fake over the lazy dog" >>> fake_tokens = tokenizer.tokenize(fake_sentence, add_special_tokens=True) >>> fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt") >>> discriminator_outputs = discriminator(fake_inputs) >>> predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2) >>> fake_tokens ['[CLS]', 'the', 'quick', 'brown', 'fox', 'fake', 'over', 'the', 'lazy', 'dog', '[SEP]'] >>> predictions.squeeze().tolist() [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0] ElectraForCausalLM class transformers.ElectraForCausalLM < source > ( config ) Parameters config (ElectraConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. ELECTRA Model with a language modeling head on top for CLM fine-tuning. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
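Because this head is meant for causal language modeling, it can also be combined with generate() once the configuration is switched to decoder mode. A minimal sketch, assuming the google/electra-base-generator checkpoint (which was not trained for left-to-right generation, so the generated text is illustrative only):
>>> from transformers import AutoTokenizer, ElectraConfig, ElectraForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("google/electra-base-generator")
>>> config = ElectraConfig.from_pretrained("google/electra-base-generator")
>>> config.is_decoder = True  # enable the causal (left-to-right) attention mask
>>> model = ElectraForCausalLM.from_pretrained("google/electra-base-generator", config=config)
>>> inputs = tokenizer("Hello, my dog is", return_tensors="pt")
>>> generated_ids = model.generate(inputs.input_ids, max_new_tokens=5)
>>> text = tokenizer.decode(generated_ids[0], skip_special_tokens=True)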
forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None encoder_hidden_states: typing.Optional[torch.Tensor] = None encoder_attention_mask: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.List[torch.Tensor]] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. 
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size] past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ElectraConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction). logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Cross attentions weights after the attention softmax, used to compute the weighted average in the cross-attention heads. 
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key, value states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting. Only relevant if config.is_decoder = True. Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. The ElectraForCausalLM forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, ElectraForCausalLM, ElectraConfig >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("google/electra-base-generator") >>> config = ElectraConfig.from_pretrained("google/electra-base-generator") >>> config.is_decoder = True >>> model = ElectraForCausalLM.from_pretrained("google/electra-base-generator", config=config) >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> prediction_logits = outputs.logits ElectraForMaskedLM class transformers.ElectraForMaskedLM < source > ( config ) Parameters config (ElectraConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Electra model with a language modeling head on top. Even though both the discriminator and generator may be loaded into this model, the generator is the only model of the two to have been trained for the masked language modeling task. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? 
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size] A transformers.modeling_outputs.MaskedLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ElectraConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. 
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The ElectraForMaskedLM forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, ElectraForMaskedLM >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("google/electra-small-generator") >>> model = ElectraForMaskedLM.from_pretrained("google/electra-small-generator") >>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> >>> mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0] >>> predicted_token_id = logits[0, mask_token_index].argmax(axis=-1) >>> tokenizer.decode(predicted_token_id) 'paris' >>> labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"] >>> >>> labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100) >>> outputs = model(**inputs, labels=labels) >>> round(outputs.loss.item(), 2) 1.22 ElectraForSequenceClassification class transformers.ElectraForSequenceClassification < source > ( config ) Parameters config (ElectraConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. ELECTRA Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. 
Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ElectraConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). 
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The ElectraForSequenceClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example of single-label classification: >>> import torch >>> from transformers import AutoTokenizer, ElectraForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("bhadresh-savani/electra-base-emotion") >>> model = ElectraForSequenceClassification.from_pretrained("bhadresh-savani/electra-base-emotion") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_id = logits.argmax().item() >>> model.config.id2label[predicted_class_id] 'joy' >>> >>> num_labels = len(model.config.id2label) >>> model = ElectraForSequenceClassification.from_pretrained("bhadresh-savani/electra-base-emotion", num_labels=num_labels) >>> labels = torch.tensor([1]) >>> loss = model(**inputs, labels=labels).loss >>> round(loss.item(), 2) 0.06 Example of multi-label classification: >>> import torch >>> from transformers import AutoTokenizer, ElectraForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("bhadresh-savani/electra-base-emotion") >>> model = ElectraForSequenceClassification.from_pretrained("bhadresh-savani/electra-base-emotion", problem_type="multi_label_classification") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5] >>> >>> num_labels = len(model.config.id2label) >>> model = ElectraForSequenceClassification.from_pretrained( ... "bhadresh-savani/electra-base-emotion", num_labels=num_labels, problem_type="multi_label_classification" ... ) >>> labels = torch.sum( ... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1 ... ).to(torch.float) >>> loss = model(**inputs, labels=labels).loss ElectraForMultipleChoice class transformers.ElectraForMultipleChoice < source > ( config ) Parameters config (ElectraConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. ELECTRA Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. 
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. encoder_hidden_states (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. encoder_attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. 
labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See input_ids above) A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ElectraConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above). Classification scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The ElectraForMultipleChoice forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, ElectraForMultipleChoice >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator") >>> model = ElectraForMultipleChoice.from_pretrained("google/electra-small-discriminator") >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." >>> choice0 = "It is eaten with a fork and a knife." >>> choice1 = "It is eaten while held in the hand." >>> labels = torch.tensor(0).unsqueeze(0) >>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True) >>> outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) >>> >>> loss = outputs.loss >>> logits = outputs.logits ElectraForTokenClassification class transformers.ElectraForTokenClassification < source > ( config ) Parameters config (ElectraConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Electra model with a token classification head on top. Both the discriminator and generator may be loaded into this model. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. 
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. 
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1]. A transformers.modeling_outputs.TokenClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ElectraConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The ElectraForTokenClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, ElectraForTokenClassification >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("bhadresh-savani/electra-base-discriminator-finetuned-conll03-english") >>> model = ElectraForTokenClassification.from_pretrained("bhadresh-savani/electra-base-discriminator-finetuned-conll03-english") >>> inputs = tokenizer( ... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt" ... ) >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_token_class_ids = logits.argmax(-1) >>> >>> >>> >>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]] >>> predicted_tokens_classes ['B-LOC', 'B-ORG', 'O', 'O', 'O', 'O', 'O', 'B-LOC', 'O', 'B-LOC', 'I-LOC'] >>> labels = predicted_token_class_ids >>> loss = model(**inputs, labels=labels).loss >>> round(loss.item(), 2) 0.11 ElectraForQuestionAnswering class transformers.ElectraForQuestionAnswering < source > ( config ) Parameters config (ElectraConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. ELECTRA Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute span start logits and span end logits). This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) 
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None start_positions: typing.Optional[torch.Tensor] = None end_positions: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. 
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. start_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. end_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ElectraConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The ElectraForQuestionAnswering forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, ElectraForQuestionAnswering >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("bhadresh-savani/electra-base-squad2") >>> model = ElectraForQuestionAnswering.from_pretrained("bhadresh-savani/electra-base-squad2") >>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" >>> inputs = tokenizer(question, text, return_tensors="pt") >>> with torch.no_grad(): ... 
outputs = model(**inputs)

>>> answer_start_index = outputs.start_logits.argmax()
>>> answer_end_index = outputs.end_logits.argmax()
>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> tokenizer.decode(predict_answer_tokens, skip_special_tokens=True)
'a nice puppet'

>>> # compute a loss by also providing target start and end positions for the answer span
>>> target_start_index = torch.tensor([11])
>>> target_end_index = torch.tensor([12])
>>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
>>> loss = outputs.loss
>>> round(loss.item(), 2)
2.64

TFElectraModel
class transformers.TFElectraModel < source > ( *args **kwargs )
Parameters config (ElectraConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare Electra Model transformer outputting raw hidden-states without any specific head on top. Identical to the BERT model except that it uses an additional linear layer between the embedding layer and the encoder if the hidden size and embedding size are different. Both the generator and discriminator checkpoints may be loaded into this model. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids})
A minimal sketch of these three formats is shown below. Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!
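The following is a minimal sketch of the three input formats listed above, not an official example; the checkpoint and sentence are illustrative, and any ELECTRA checkpoint would behave the same way:

>>> from transformers import AutoTokenizer, TFElectraModel

>>> tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator")
>>> model = TFElectraModel.from_pretrained("google/electra-small-discriminator")
>>> encoding = tokenizer("Hello, my dog is cute", return_tensors="tf")

>>> # 1. keyword arguments, as with PyTorch models
>>> outputs = model(input_ids=encoding["input_ids"], attention_mask=encoding["attention_mask"])

>>> # 2. a list with the tensors in the order given in the docstring
>>> outputs = model([encoding["input_ids"], encoding["attention_mask"]])

>>> # 3. a dictionary keyed by input name
>>> outputs = model({"input_ids": encoding["input_ids"], "attention_mask": encoding["attention_mask"]})

The three calls are equivalent and return the model outputs described in the call documentation below.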
call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None encoder_hidden_states: np.ndarray | tf.Tensor | None = None encoder_attention_mask: np.ndarray | tf.Tensor | None = None past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None use_cache: Optional[bool] = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). encoder_hidden_states (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. encoder_attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. 
This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) — contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). use_cache (bool, optional, defaults to True) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). Set to False during training, True during generation A transformers.modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ElectraConfig) and inputs. last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output. past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)). Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(tf.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. The TFElectraModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: >>> from transformers import AutoTokenizer, TFElectraModel >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator") >>> model = TFElectraModel.from_pretrained("google/electra-small-discriminator") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> outputs = model(inputs) >>> last_hidden_states = outputs.last_hidden_state TFElectraForPreTraining class transformers.TFElectraForPreTraining < source > ( *args **kwargs ) Parameters config (ElectraConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Electra model with a binary classification head on top as used during pretraining for identifying generated tokens. Even though both the discriminator and generator may be loaded into this model, the discriminator is the only model of the two to have the correct classification head to be used for this model. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! 
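As a quick orientation before the full call signature, here is a minimal sketch (not an official example) of how the per-token logits of this head can be turned into replaced-token predictions; the input sentence and the decision threshold of 0 (equivalent to a sigmoid probability of 0.5 for this binary head) are illustrative assumptions:

>>> import tensorflow as tf
>>> from transformers import AutoTokenizer, TFElectraForPreTraining

>>> tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator")
>>> model = TFElectraForPreTraining.from_pretrained("google/electra-small-discriminator")

>>> inputs = tokenizer("The quick brown fox jumps over the lazy dog", return_tensors="tf")
>>> logits = model(**inputs).logits  # shape (batch_size, sequence_length)

>>> # a positive logit means the discriminator scores the token as replaced (generated),
>>> # a negative logit means it scores the token as original
>>> predicted_as_replaced = tf.cast(logits > 0, tf.int32)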
call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: Optional[bool] = False ) → transformers.models.electra.modeling_tf_electra.TFElectraForPreTrainingOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). A transformers.models.electra.modeling_tf_electra.TFElectraForPreTrainingOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ElectraConfig) and inputs. loss (optional, returned when labels is provided, tf.Tensor of shape (1,)) — Total loss of the ELECTRA objective. logits (tf.Tensor of shape (batch_size, sequence_length)) — Prediction scores of the head (scores for each token before SoftMax). 
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFElectraForPreTraining forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> import tensorflow as tf >>> from transformers import AutoTokenizer, TFElectraForPreTraining >>> tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator") >>> model = TFElectraForPreTraining.from_pretrained("google/electra-small-discriminator") >>> input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :] >>> outputs = model(input_ids) >>> scores = outputs[0] TFElectraForMaskedLM class transformers.TFElectraForMaskedLM < source > ( *args **kwargs ) Parameters config (ElectraConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Electra model with a language modeling head on top. Even though both the discriminator and generator may be loaded into this model, the generator is the only model of the two to have been trained for the masked language modeling task. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! 
If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. 
This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). labels (tf.Tensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]. A transformers.modeling_tf_outputs.TFMaskedLMOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ElectraConfig) and inputs. loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Masked language modeling (MLM) loss. logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFElectraForMaskedLM forward method overrides the __call__ special method. Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example:
>>> from transformers import AutoTokenizer, TFElectraForMaskedLM
>>> import tensorflow as tf

>>> tokenizer = AutoTokenizer.from_pretrained("google/electra-small-generator")
>>> model = TFElectraForMaskedLM.from_pretrained("google/electra-small-generator")

>>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="tf")
>>> logits = model(**inputs).logits

>>> # retrieve the index of the [MASK] token
>>> mask_token_index = tf.where((inputs.input_ids == tokenizer.mask_token_id)[0])
>>> selected_logits = tf.gather_nd(logits[0], indices=mask_token_index)

>>> predicted_token_id = tf.math.argmax(selected_logits, axis=-1)
>>> tokenizer.decode(predicted_token_id)
'paris'

>>> labels = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"]
>>> # only keep labels at [MASK] positions; everything else is set to -100 and ignored by the loss
>>> labels = tf.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)

>>> outputs = model(**inputs, labels=labels)
>>> round(float(outputs.loss), 2)
1.22

TFElectraForSequenceClassification
class transformers.TFElectraForSequenceClassification < source > ( *args **kwargs )
Parameters config (ElectraConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration.
Check out the from_pretrained() method to load the model weights. ELECTRA Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? 
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). labels (tf.Tensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ElectraConfig) and inputs. loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFElectraForSequenceClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example:
>>> from transformers import AutoTokenizer, TFElectraForSequenceClassification
>>> import tensorflow as tf

>>> tokenizer = AutoTokenizer.from_pretrained("bhadresh-savani/electra-base-emotion")
>>> model = TFElectraForSequenceClassification.from_pretrained("bhadresh-savani/electra-base-emotion")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> logits = model(**inputs).logits

>>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
>>> model.config.id2label[predicted_class_id]
'joy'

>>> # to train on num_labels classes, you can pass num_labels=num_labels to .from_pretrained(...)
>>> num_labels = len(model.config.id2label)
>>> model = TFElectraForSequenceClassification.from_pretrained("bhadresh-savani/electra-base-emotion", num_labels=num_labels)

>>> labels = tf.constant(1)
>>> loss = model(**inputs, labels=labels).loss
>>> round(float(loss), 2)
0.06

TFElectraForMultipleChoice
class transformers.TFElectraForMultipleChoice < source > ( *args **kwargs )
Parameters config (ElectraConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. ELECTRA Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument:
a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids})
Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! A short sketch of how choices are packed for this head follows below.
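As referenced above, here is a minimal sketch (not an official example) of how candidate choices are packed into the (batch_size, num_choices, sequence_length) layout this head expects and how the highest-scoring choice is read back out; the prompt, the choices and the checkpoint are illustrative, and because the discriminator checkpoint has no trained multiple-choice head, the prediction is only meaningful after fine-tuning:

>>> import tensorflow as tf
>>> from transformers import AutoTokenizer, TFElectraForMultipleChoice

>>> tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator")
>>> model = TFElectraForMultipleChoice.from_pretrained("google/electra-small-discriminator")

>>> prompt = "The glass fell off the table and"
>>> choices = ["it shattered on the floor.", "it flew out of the window."]

>>> # pair the prompt with every choice, then add a leading batch dimension so each
>>> # tensor has shape (batch_size=1, num_choices=2, sequence_length)
>>> encoding = tokenizer([prompt] * len(choices), choices, return_tensors="tf", padding=True)
>>> inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()}

>>> logits = model(inputs).logits  # shape (batch_size, num_choices)
>>> predicted_choice = int(tf.math.argmax(logits, axis=-1)[0])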
call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). labels (tf.Tensor of shape (batch_size,), optional) — Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices] where num_choices is the size of the second dimension of the input tensors. (See input_ids above) A transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ElectraConfig) and inputs. 
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification loss. logits (tf.Tensor of shape (batch_size, num_choices)) — Classification scores (before SoftMax); num_choices is the second dimension of the input tensors (see input_ids above). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFElectraForMultipleChoice forward method overrides the __call__ special method. Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example:
>>> from transformers import AutoTokenizer, TFElectraForMultipleChoice
>>> import tensorflow as tf

>>> tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator")
>>> model = TFElectraForMultipleChoice.from_pretrained("google/electra-small-discriminator")

>>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
>>> choice0 = "It is eaten with a fork and a knife."
>>> choice1 = "It is eaten while held in the hand."

>>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="tf", padding=True)
>>> inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()}
>>> outputs = model(inputs)

>>> # the linear classifier on top still needs to be trained, so these logits are not yet meaningful
>>> logits = outputs.logits

TFElectraForTokenClassification
class transformers.TFElectraForTokenClassification < source > ( *args **kwargs )
Parameters config (ElectraConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Electra model with a token classification head on top. Both the discriminator and generator may be loaded into this model. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports!
If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. 
This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). labels (tf.Tensor of shape (batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1]. A transformers.modeling_tf_outputs.TFTokenClassifierOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ElectraConfig) and inputs. loss (tf.Tensor of shape (n,), optional, where n is the number of unmasked labels, returned when labels is provided) — Classification loss. logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFElectraForTokenClassification forward method overrides the __call__ special method. Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example:
>>> from transformers import AutoTokenizer, TFElectraForTokenClassification
>>> import tensorflow as tf

>>> tokenizer = AutoTokenizer.from_pretrained("bhadresh-savani/electra-base-discriminator-finetuned-conll03-english")
>>> model = TFElectraForTokenClassification.from_pretrained("bhadresh-savani/electra-base-discriminator-finetuned-conll03-english")

>>> inputs = tokenizer(
...     "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf"
... )

>>> logits = model(**inputs).logits
>>> predicted_token_class_ids = tf.math.argmax(logits, axis=-1)

>>> # Note that tokens are classified rather than input words, so there may be more
>>> # predicted token classes than words; several tokens can belong to the same word.
>>> predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]
>>> predicted_tokens_classes
['B-LOC', 'B-ORG', 'O', 'O', 'O', 'O', 'O', 'B-LOC', 'O', 'B-LOC', 'I-LOC']

>>> labels = predicted_token_class_ids
>>> loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss)
>>> round(float(loss), 2)
0.11

TFElectraForQuestionAnswering
class transformers.TFElectraForQuestionAnswering < source > ( *args **kwargs )
Parameters config (ElectraConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Electra Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute span start logits and span end logits). This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None start_positions: np.ndarray | tf.Tensor | None = None end_positions: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? 
head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). start_positions (tf.Tensor of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. end_positions (tf.Tensor of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. A transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ElectraConfig) and inputs. loss (tf.Tensor of shape (batch_size, ), optional, returned when start_positions and end_positions are provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. start_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. 
The TFElectraForQuestionAnswering forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFElectraForQuestionAnswering >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("bhadresh-savani/electra-base-squad2") >>> model = TFElectraForQuestionAnswering.from_pretrained("bhadresh-savani/electra-base-squad2") >>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" >>> inputs = tokenizer(question, text, return_tensors="tf") >>> outputs = model(**inputs) >>> answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0]) >>> answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0]) >>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] >>> tokenizer.decode(predict_answer_tokens) 'a nice puppet' >>> >>> target_start_index = tf.constant([11]) >>> target_end_index = tf.constant([12]) >>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index) >>> loss = tf.math.reduce_mean(outputs.loss) >>> round(float(loss), 2) 2.64 FlaxElectraModel class transformers.FlaxElectraModel < source > ( config: ElectraConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True gradient_checkpointing: bool = False **kwargs ) Parameters config (ElectraConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare Electra Model transformer outputting raw hidden-states without any specific head on top. This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading, saving and converting weights from PyTorch models) This model is also a Flax Linen flax.nn.Module subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matter related to general usage and behavior. Finally, this model supports inherent JAX features such as: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization __call__ < source > ( input_ids attention_mask = None token_type_ids = None position_ids = None head_mask = None encoder_hidden_states = None encoder_attention_mask = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None past_key_values: dict = None ) → transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(torch.FloatTensor) Parameters input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. 
What are attention masks? token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) -- Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]`: 1 indicates the head is not masked, 0 indicates the head is masked. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_flax_outputs.FlaxBaseModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ElectraConfig) and inputs. last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaxElectraPreTrainedModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FlaxElectraModel >>> tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator") >>> model = FlaxElectraModel.from_pretrained("google/electra-small-discriminator") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="jax") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state FlaxElectraForPreTraining class transformers.FlaxElectraForPreTraining < source > ( config: ElectraConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True gradient_checkpointing: bool = False **kwargs ) Parameters config (ElectraConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Electra model with a binary classification head on top as used during pretraining for identifying generated tokens. It is recommended to load the discriminator checkpoint into that model. This model inherits from FlaxPreTrainedModel. 
Check the superclass documentation for the generic methods the library implements for all its model (such as downloading, saving and converting weights from PyTorch models) This model is also a Flax Linen flax.nn.Module subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matter related to general usage and behavior. Finally, this model supports inherent JAX features such as: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization __call__ < source > ( input_ids attention_mask = None token_type_ids = None position_ids = None head_mask = None encoder_hidden_states = None encoder_attention_mask = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None past_key_values: dict = None ) → transformers.models.electra.modeling_flax_electra.FlaxElectraForPreTrainingOutput or tuple(torch.FloatTensor) Parameters input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) -- Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]`: 1 indicates the head is not masked, 0 indicates the head is masked. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.models.electra.modeling_flax_electra.FlaxElectraForPreTrainingOutput or tuple(torch.FloatTensor) A transformers.models.electra.modeling_flax_electra.FlaxElectraForPreTrainingOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ElectraConfig) and inputs. logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. 
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaxElectraPreTrainedModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FlaxElectraForPreTraining >>> tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator") >>> model = FlaxElectraForPreTraining.from_pretrained("google/electra-small-discriminator") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="np") >>> outputs = model(**inputs) >>> prediction_logits = outputs.logits FlaxElectraForCausalLM class transformers.FlaxElectraForCausalLM < source > ( config: ElectraConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True gradient_checkpointing: bool = False **kwargs ) Parameters config (ElectraConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Electra Model with a language modeling head on top (a linear layer on top of the hidden-states output) e.g for autoregressive tasks. This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading, saving and converting weights from PyTorch models) This model is also a Flax Linen flax.nn.Module subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matter related to general usage and behavior. Finally, this model supports inherent JAX features such as: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization __call__ < source > ( input_ids attention_mask = None token_type_ids = None position_ids = None head_mask = None encoder_hidden_states = None encoder_attention_mask = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None past_key_values: dict = None ) → transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor) Parameters input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. 
Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) -- Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]`: 1 indicates the head is not masked, 0 indicates the head is masked. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ElectraConfig) and inputs. logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Cross attentions weights after the attention softmax, used to compute the weighted average in the cross-attention heads. past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of jnp.ndarray tuples of length config.n_layers, with each tuple containing the cached key, value states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting. Only relevant if config.is_decoder = True. Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. The FlaxElectraPreTrainedModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: >>> from transformers import AutoTokenizer, FlaxElectraForCausalLM >>> tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator") >>> model = FlaxElectraForCausalLM.from_pretrained("google/electra-small-discriminator") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="np") >>> outputs = model(**inputs) >>> >>> next_token_logits = outputs.logits[:, -1] FlaxElectraForMaskedLM class transformers.FlaxElectraForMaskedLM < source > ( config: ElectraConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True gradient_checkpointing: bool = False **kwargs ) Parameters config (ElectraConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Electra Model with a language modeling head on top. This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading, saving and converting weights from PyTorch models) This model is also a Flax Linen flax.nn.Module subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matter related to general usage and behavior. Finally, this model supports inherent JAX features such as: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization __call__ < source > ( input_ids attention_mask = None token_type_ids = None position_ids = None head_mask = None encoder_hidden_states = None encoder_attention_mask = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None past_key_values: dict = None ) → transformers.modeling_flax_outputs.FlaxMaskedLMOutput or tuple(torch.FloatTensor) Parameters input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) -- Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]`: 1 indicates the head is not masked, 0 indicates the head is masked. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. 
A transformers.modeling_flax_outputs.FlaxMaskedLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ElectraConfig) and inputs. logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaxElectraPreTrainedModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FlaxElectraForMaskedLM >>> tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator") >>> model = FlaxElectraForMaskedLM.from_pretrained("google/electra-small-discriminator") >>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="jax") >>> outputs = model(**inputs) >>> logits = outputs.logits FlaxElectraForSequenceClassification class transformers.FlaxElectraForSequenceClassification < source > ( config: ElectraConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True gradient_checkpointing: bool = False **kwargs ) Parameters config (ElectraConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Electra Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading, saving and converting weights from PyTorch models) This model is also a Flax Linen flax.nn.Module subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matter related to general usage and behavior. 
Finally, this model supports inherent JAX features such as: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization __call__ < source > ( input_ids attention_mask = None token_type_ids = None position_ids = None head_mask = None encoder_hidden_states = None encoder_attention_mask = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None past_key_values: dict = None ) → transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) -- Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]`: 1 indicates the head is not masked, 0 indicates the head is masked. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ElectraConfig) and inputs. logits (jnp.ndarray of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaxElectraPreTrainedModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: >>> from transformers import AutoTokenizer, FlaxElectraForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator") >>> model = FlaxElectraForSequenceClassification.from_pretrained("google/electra-small-discriminator") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="jax") >>> outputs = model(**inputs) >>> logits = outputs.logits FlaxElectraForMultipleChoice class transformers.FlaxElectraForMultipleChoice < source > ( config: ElectraConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True gradient_checkpointing: bool = False **kwargs ) Parameters config (ElectraConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. ELECTRA Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading, saving and converting weights from PyTorch models) This model is also a Flax Linen flax.nn.Module subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matter related to general usage and behavior. Finally, this model supports inherent JAX features such as: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization __call__ < source > ( input_ids attention_mask = None token_type_ids = None position_ids = None head_mask = None encoder_hidden_states = None encoder_attention_mask = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None past_key_values: dict = None ) → transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or tuple(torch.FloatTensor) Parameters input_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. head_mask (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) -- Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]`: 1 indicates the head is not masked, 0 indicates the head is masked. 
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ElectraConfig) and inputs. logits (jnp.ndarray of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above). Classification scores (before SoftMax). hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaxElectraPreTrainedModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FlaxElectraForMultipleChoice >>> tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator") >>> model = FlaxElectraForMultipleChoice.from_pretrained("google/electra-small-discriminator") >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." >>> choice0 = "It is eaten with a fork and a knife." >>> choice1 = "It is eaten while held in the hand." >>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="jax", padding=True) >>> outputs = model(**{k: v[None, :] for k, v in encoding.items()}) >>> logits = outputs.logits FlaxElectraForTokenClassification class transformers.FlaxElectraForTokenClassification < source > ( config: ElectraConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True gradient_checkpointing: bool = False **kwargs ) Parameters config (ElectraConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Electra model with a token classification head on top. Both the discriminator and generator may be loaded into this model. This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading, saving and converting weights from PyTorch models) This model is also a Flax Linen flax.nn.Module subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matter related to general usage and behavior. 
Finally, this model supports inherent JAX features such as: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization __call__ < source > ( input_ids attention_mask = None token_type_ids = None position_ids = None head_mask = None encoder_hidden_states = None encoder_attention_mask = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None past_key_values: dict = None ) → transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) -- Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]`: 1 indicates the head is not masked, 0 indicates the head is masked. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ElectraConfig) and inputs. logits (jnp.ndarray of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaxElectraPreTrainedModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: >>> from transformers import AutoTokenizer, FlaxElectraForTokenClassification >>> tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator") >>> model = FlaxElectraForTokenClassification.from_pretrained("google/electra-small-discriminator") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="jax") >>> outputs = model(**inputs) >>> logits = outputs.logits FlaxElectraForQuestionAnswering class transformers.FlaxElectraForQuestionAnswering < source > ( config: ElectraConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True gradient_checkpointing: bool = False **kwargs ) Parameters config (ElectraConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. ELECTRA Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute span start logits and span end logits). This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models) This model is also a Flax Linen flax.nn.Module subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior. Finally, this model supports inherent JAX features such as: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization __call__ < source > ( input_ids attention_mask = None token_type_ids = None position_ids = None head_mask = None encoder_hidden_states = None encoder_attention_mask = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None past_key_values: dict = None ) → transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput or tuple(torch.FloatTensor) Parameters input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. head_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to nullify selected heads of the attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ElectraConfig) and inputs. start_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaxElectraPreTrainedModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FlaxElectraForQuestionAnswering >>> tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator") >>> model = FlaxElectraForQuestionAnswering.from_pretrained("google/electra-small-discriminator") >>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" >>> inputs = tokenizer(question, text, return_tensors="jax") >>> outputs = model(**inputs) >>> start_scores = outputs.start_logits >>> end_scores = outputs.end_logits
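To turn these scores into an answer string you can apply the same post-processing as in the TensorFlow example earlier on this page. A minimal sketch (note that google/electra-small-discriminator has no fine-tuned question-answering head, so the decoded span is only illustrative):
>>> import jax.numpy as jnp
>>> # pick the most likely start and end positions for the first example in the batch
>>> answer_start_index = int(jnp.argmax(outputs.start_logits, axis=-1)[0])
>>> answer_end_index = int(jnp.argmax(outputs.end_logits, axis=-1)[0])
>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> tokenizer.decode(predict_answer_tokens.tolist())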
https://huggingface.co/docs/transformers/model_doc/ernie
ERNIE Overview ERNIE is a series of powerful models proposed by Baidu that perform especially well on Chinese tasks, including [ERNIE1.0](https://arxiv.org/abs/1904.09223), [ERNIE2.0](https://ojs.aaai.org/index.php/AAAI/article/view/6428), [ERNIE3.0](https://arxiv.org/abs/2107.02137), [ERNIE-Gram](https://arxiv.org/abs/2010.12148), [ERNIE-health](https://arxiv.org/abs/2110.07244), etc. These models were contributed by nghuyong, and the official code can be found in PaddleNLP (in PaddlePaddle). How to use Take `ernie-1.0-base-zh` as an example:
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-1.0-base-zh")
model = AutoModel.from_pretrained("nghuyong/ernie-1.0-base-zh")
Supported Models
| Model Name | Language | Description |
|---|---|---|
| ernie-1.0-base-zh | Chinese | Layer:12, Heads:12, Hidden:768 |
| ernie-2.0-base-en | English | Layer:12, Heads:12, Hidden:768 |
| ernie-2.0-large-en | English | Layer:24, Heads:16, Hidden:1024 |
| ernie-3.0-base-zh | Chinese | Layer:12, Heads:12, Hidden:768 |
| ernie-3.0-medium-zh | Chinese | Layer:6, Heads:12, Hidden:768 |
| ernie-3.0-mini-zh | Chinese | Layer:6, Heads:12, Hidden:384 |
| ernie-3.0-micro-zh | Chinese | Layer:4, Heads:12, Hidden:384 |
| ernie-3.0-nano-zh | Chinese | Layer:4, Heads:12, Hidden:312 |
| ernie-health-zh | Chinese | Layer:12, Heads:12, Hidden:768 |
| ernie-gram-zh | Chinese | Layer:12, Heads:12, Hidden:768 |
You can find all the supported models on Hugging Face’s model hub: huggingface.co/nghuyong, and model details in Paddle’s official repos: PaddleNLP and ERNIE. Documentation resources Text classification task guide Token classification task guide Question answering task guide Causal language modeling task guide Masked language modeling task guide Multiple choice task guide ErnieConfig class transformers.ErnieConfig < source > ( vocab_size = 30522 hidden_size = 768 num_hidden_layers = 12 num_attention_heads = 12 intermediate_size = 3072 hidden_act = 'gelu' hidden_dropout_prob = 0.1 attention_probs_dropout_prob = 0.1 max_position_embeddings = 512 type_vocab_size = 2 task_type_vocab_size = 3 use_task_id = False initializer_range = 0.02 layer_norm_eps = 1e-12 pad_token_id = 0 position_embedding_type = 'absolute' use_cache = True classifier_dropout = None **kwargs ) Parameters vocab_size (int, optional, defaults to 30522) — Vocabulary size of the ERNIE model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling ErnieModel or TFErnieModel. hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder. hidden_act (str or Callable, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported. hidden_dropout_prob (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (float, optional, defaults to 0.1) — The dropout ratio for the attention probabilities. max_position_embeddings (int, optional, defaults to 512) — The maximum sequence length that this model might ever be used with.
Typically set this to something large just in case (e.g., 512 or 1024 or 2048). type_vocab_size (int, optional, defaults to 2) — The vocabulary size of the token_type_ids passed when calling ErnieModel or TFErnieModel. task_type_vocab_size (int, optional, defaults to 3) — The vocabulary size of the task_type_ids for ERNIE2.0/ERNIE3.0 model use_task_id (bool, optional, defaults to False) — Whether or not the model support task_type_ids initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers. position_embedding_type (str, optional, defaults to "absolute") — Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For positional embeddings use "absolute". For more information on "relative_key", please refer to Self-Attention with Relative Position Representations (Shaw et al.). For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models with Better Relative Position Embeddings (Huang et al.). is_decoder (bool, optional, defaults to False) — Whether the model is used as a decoder or not. If False, the model is used as an encoder. use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if config.is_decoder=True. classifier_dropout (float, optional) — The dropout ratio for the classification head. This is the configuration class to store the configuration of a ErnieModel or a TFErnieModel. It is used to instantiate a ERNIE model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the ERNIE nghuyong/ernie-3.0-base-zh architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Examples: >>> from transformers import ErnieConfig, ErnieModel >>> >>> configuration = ErnieConfig() >>> >>> model = ErnieModel(configuration) >>> >>> configuration = model.config Ernie specific outputs class transformers.models.ernie.modeling_ernie.ErnieForPreTrainingOutput < source > ( loss: typing.Optional[torch.FloatTensor] = None prediction_logits: FloatTensor = None seq_relationship_logits: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters loss (optional, returned when labels is provided, torch.FloatTensor of shape (1,)) — Total loss as the sum of the masked language modeling loss and the next sequence prediction (classification) loss. prediction_logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). seq_relationship_logits (torch.FloatTensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation before SoftMax). 
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Output type of ErnieForPreTraining. ErnieModel class transformers.ErnieModel < source > ( config add_pooling_layer = True ) Parameters config (ErnieConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare Ernie Model transformer outputting raw hidden-states without any specific head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in Attention is all you need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. To behave as a decoder the model needs to be initialized with the is_decoder argument of the configuration set to True. To be used in a Seq2Seq model, the model needs to be initialized with both the is_decoder argument and add_cross_attention set to True; encoder_hidden_states is then expected as an input to the forward pass. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None task_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None encoder_hidden_states: typing.Optional[torch.Tensor] = None encoder_attention_mask: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs?
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? task_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Task type embedding is a special embedding to represent the characteristic of different tasks, such as word-aware pre-training task, structure-aware pre-training task and semantic-aware pre-training task. We assign a task_type_id to each task and the task_type_id is in the range `[0, config.task_type_vocab_size-1] position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). 
A transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ErnieConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. The ErnieModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
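In addition to the generic example below, the ERNIE-specific task_type_ids argument can be passed explicitly. A minimal sketch (it assumes a task id of 0 for every token; if config.use_task_id is False for the loaded checkpoint, the argument should simply have no effect):
>>> import torch
>>> from transformers import AutoTokenizer, ErnieModel
>>> tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-3.0-base-zh")
>>> model = ErnieModel.from_pretrained("nghuyong/ernie-3.0-base-zh")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> # one task id per token, with values in [0, config.task_type_vocab_size - 1]
>>> task_type_ids = torch.zeros_like(inputs["input_ids"])
>>> outputs = model(**inputs, task_type_ids=task_type_ids)
>>> last_hidden_states = outputs.last_hidden_state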
Example: >>> from transformers import AutoTokenizer, ErnieModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-1.0-base-zh") >>> model = ErnieModel.from_pretrained("nghuyong/ernie-1.0-base-zh") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state ErnieForPreTraining class transformers.ErnieForPreTraining < source > ( config ) Parameters config (ErnieConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Ernie Model with two heads on top as done during the pretraining: a masked language modeling head and a next sentence prediction (classification) head. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None task_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None next_sentence_label: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.ernie.modeling_ernie.ErnieForPreTrainingOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? task_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Task type embedding is a special embedding to represent the characteristic of different tasks, such as word-aware pre-training task, structure-aware pre-training task and semantic-aware pre-training task. We assign a task_type_id to each task and the task_type_id is in the range `[0, config.task_type_vocab_size-1] position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? 
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional): Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size] next_sentence_label (torch.LongTensor of shape (batch_size,), optional): Labels for computing the next sequence prediction (classification) loss. Input should be a sequence pair (see input_ids docstring) Indices should be in [0, 1]: 0 indicates sequence B is a continuation of sequence A, 1 indicates sequence B is a random sequence. kwargs (Dict[str, any], optional, defaults to {}): Used to hide legacy arguments that have been deprecated. A transformers.models.ernie.modeling_ernie.ErnieForPreTrainingOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ErnieConfig) and inputs. loss (optional, returned when labels is provided, torch.FloatTensor of shape (1,)) — Total loss as the sum of the masked language modeling loss and the next sequence prediction (classification) loss. prediction_logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). seq_relationship_logits (torch.FloatTensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The ErnieForPreTraining forward method, overrides the __call__ special method. 
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, ErnieForPreTraining >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-1.0-base-zh") >>> model = ErnieForPreTraining.from_pretrained("nghuyong/ernie-1.0-base-zh") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> prediction_logits = outputs.prediction_logits >>> seq_relationship_logits = outputs.seq_relationship_logits ErnieForCausalLM class transformers.ErnieForCausalLM < source > ( config ) Parameters config (ErnieConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Ernie Model with a language modeling head on top for CLM fine-tuning. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None task_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None encoder_hidden_states: typing.Optional[torch.Tensor] = None encoder_attention_mask: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.List[torch.Tensor]] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? task_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Task type embedding is a special embedding to represent the characteristic of different tasks, such as word-aware pre-training task, structure-aware pre-training task and semantic-aware pre-training task. 
We assign a task_type_id to each task and the task_type_id is in the range [0, config.task_type_vocab_size-1]. position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]. past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ErnieConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction).
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Cross attentions weights after the attention softmax, used to compute the weighted average in the cross-attention heads. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key, value states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting. Only relevant if config.is_decoder = True. Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. The ErnieForCausalLM forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> import torch >>> from transformers import AutoTokenizer, ErnieForCausalLM >>> tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-1.0-base-zh") >>> model = ErnieForCausalLM.from_pretrained("nghuyong/ernie-1.0-base-zh") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs, labels=inputs["input_ids"]) >>> loss = outputs.loss >>> logits = outputs.logits ErnieForMaskedLM class transformers.ErnieForMaskedLM < source > ( config ) Parameters config (ErnieConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Ernie Model with a language modeling head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None task_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None encoder_hidden_states: typing.Optional[torch.Tensor] = None encoder_attention_mask: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? task_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Task type embedding is a special embedding to represent the characteristic of different tasks, such as word-aware pre-training task, structure-aware pre-training task and semantic-aware pre-training task. We assign a task_type_id to each task and the task_type_id is in the range `[0, config.task_type_vocab_size-1] position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. 
Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size] A transformers.modeling_outputs.MaskedLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ErnieConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The ErnieForMaskedLM forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, ErnieForMaskedLM >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-1.0-base-zh") >>> model = ErnieForMaskedLM.from_pretrained("nghuyong/ernie-1.0-base-zh") >>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> >>> mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0] >>> predicted_token_id = logits[0, mask_token_index].argmax(axis=-1) >>> tokenizer.decode(predicted_token_id) 'paris' >>> labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"] >>> >>> labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100) >>> outputs = model(**inputs, labels=labels) >>> round(outputs.loss.item(), 2) 0.88 ErnieForNextSentencePrediction class transformers.ErnieForNextSentencePrediction < source > ( config ) Parameters config (ErnieConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Ernie Model with a next sentence prediction (classification) head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. 
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None task_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None **kwargs ) → transformers.modeling_outputs.NextSentencePredictorOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? task_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Task type embedding is a special embedding to represent the characteristic of different tasks, such as word-aware pre-training task, structure-aware pre-training task and semantic-aware pre-training task. We assign a task_type_id to each task and the task_type_id is in the range `[0, config.task_type_vocab_size-1] position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the next sequence prediction (classification) loss. Input should be a sequence pair (see input_ids docstring). 
Indices should be in [0, 1]: 0 indicates sequence B is a continuation of sequence A, 1 indicates sequence B is a random sequence. A transformers.modeling_outputs.NextSentencePredictorOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ErnieConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when next_sentence_label is provided) — Next sequence prediction (classification) loss. logits (torch.FloatTensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The ErnieForNextSentencePrediction forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, ErnieForNextSentencePrediction >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-1.0-base-zh") >>> model = ErnieForNextSentencePrediction.from_pretrained("nghuyong/ernie-1.0-base-zh") >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." >>> next_sentence = "The sky is blue due to the shorter wavelength of blue light." >>> encoding = tokenizer(prompt, next_sentence, return_tensors="pt") >>> outputs = model(**encoding, labels=torch.LongTensor([1])) >>> logits = outputs.logits >>> assert logits[0, 0] < logits[0, 1] ErnieForSequenceClassification class transformers.ErnieForSequenceClassification < source > ( config ) Parameters config (ErnieConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Ernie Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None task_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? task_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Task type embedding is a special embedding to represent the characteristic of different tasks, such as word-aware pre-training task, structure-aware pre-training task and semantic-aware pre-training task. We assign a task_type_id to each task and the task_type_id is in the range `[0, config.task_type_vocab_size-1] position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). The ErnieForSequenceClassification forward method, overrides the __call__ special method. 
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. ErnieForMultipleChoice class transformers.ErnieForMultipleChoice < source > ( config ) Parameters config (ErnieConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Ernie Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None task_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? task_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) — Task type embedding is a special embedding to represent the characteristic of different tasks, such as word-aware pre-training task, structure-aware pre-training task and semantic-aware pre-training task. We assign a task_type_id to each task and the task_type_id is in the range `[0, config.task_type_vocab_size-1] position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. 
Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See input_ids above) A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ErnieConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above). Classification scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The ErnieForMultipleChoice forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, ErnieForMultipleChoice >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-1.0-base-zh") >>> model = ErnieForMultipleChoice.from_pretrained("nghuyong/ernie-1.0-base-zh") >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." >>> choice0 = "It is eaten with a fork and a knife." >>> choice1 = "It is eaten while held in the hand." 
>>> labels = torch.tensor(0).unsqueeze(0) >>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True) >>> outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) >>> >>> loss = outputs.loss >>> logits = outputs.logits ErnieForTokenClassification class transformers.ErnieForTokenClassification < source > ( config ) Parameters config (ErnieConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Ernie Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None task_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? task_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Task type embedding is a special embedding to represent the characteristic of different tasks, such as word-aware pre-training task, structure-aware pre-training task and semantic-aware pre-training task. We assign a task_type_id to each task and the task_type_id is in the range `[0, config.task_type_vocab_size-1] position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. 
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1]. The ErnieForTokenClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. ErnieForQuestionAnswering class transformers.ErnieForQuestionAnswering < source > ( config ) Parameters config (ErnieConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Ernie Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute span start logits and span end logits). This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None task_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None start_positions: typing.Optional[torch.Tensor] = None end_positions: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? 
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? task_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Task type embedding is a special embedding to represent the characteristic of different tasks, such as word-aware pre-training task, structure-aware pre-training task and semantic-aware pre-training task. We assign a task_type_id to each task and the task_type_id is in the range [0, config.task_type_vocab_size-1]. position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. start_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss. end_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss. The ErnieForQuestionAnswering forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
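Example (a minimal usage sketch — the question-answering head on top of the base nghuyong/ernie-1.0-base-zh checkpoint is randomly initialized, so the decoded span is only illustrative until the model has been fine-tuned on a QA dataset; the question/context strings below are hypothetical placeholders):
>>> from transformers import AutoTokenizer, ErnieForQuestionAnswering
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("nghuyong/ernie-1.0-base-zh")
>>> model = ErnieForQuestionAnswering.from_pretrained("nghuyong/ernie-1.0-base-zh")
>>> question, context = "Who was Jim Henson?", "Jim Henson was a nice puppet"
>>> inputs = tokenizer(question, context, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> # pick the most likely start and end positions of the answer span
>>> answer_start_index = outputs.start_logits.argmax()
>>> answer_end_index = outputs.end_logits.argmax()
>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> answer = tokenizer.decode(predict_answer_tokens)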
https://huggingface.co/docs/transformers/model_doc/flan-ul2
FLAN-UL2 Overview Flan-UL2 is an encoder-decoder model based on the T5 architecture. It uses the same configuration as the UL2 model released the previous year. It was fine-tuned using the “Flan” prompt tuning and dataset collection. Similar to Flan-T5, one can directly use FLAN-UL2 weights without fine-tuning the model. According to the original blog, these are the notable improvements: The original UL2 model was only trained with a receptive field of 512, which made it non-ideal for N-shot prompting where N is large. The Flan-UL2 checkpoint uses a receptive field of 2048, which makes it more usable for few-shot in-context learning (a short few-shot sketch is given at the end of this page). The original UL2 model also had mode switch tokens that were rather mandatory to get good performance. However, they were a little cumbersome, as they often required some changes during inference or finetuning. In this update, UL2 20B was trained for an additional 100k steps (with a small batch size) to forget the “mode tokens” before applying Flan instruction tuning. This Flan-UL2 checkpoint does not require mode tokens anymore. Google has released the following variants: One can refer to T5’s documentation page for all tips, code examples and notebooks, as well as the FLAN-T5 model card for more details regarding training and evaluation of the model. The original checkpoints can be found here. Running on low resource devices The model is pretty heavy (~40GB in half precision), so if you just want to run it, make sure you load the model in 8-bit and use device_map="auto" to avoid any OOM issues! >>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer >>> model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-ul2", load_in_8bit=True, device_map="auto") >>> tokenizer = AutoTokenizer.from_pretrained("google/flan-ul2") >>> inputs = tokenizer("A step by step recipe to make bolognese pasta:", return_tensors="pt") >>> outputs = model.generate(**inputs) >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) ['In a large skillet, brown the ground beef and onion over medium heat. Add the garlic'] Inference The inference protocol is exactly the same as for any T5 model; please have a look at T5’s documentation page for more details.
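Few-shot prompting A rough sketch of the few-shot in-context learning mentioned above, reusing the 8-bit loading shown earlier. The prompt text is only an illustration (not from the original release), and the exact output will depend on the checkpoint and generation settings:
>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
>>> model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-ul2", load_in_8bit=True, device_map="auto")
>>> tokenizer = AutoTokenizer.from_pretrained("google/flan-ul2")
>>> # a simple 2-shot prompt; the 2048-token receptive field leaves room for many more in-context examples
>>> prompt = (
...     "Review: The movie was fantastic, I loved every minute. Sentiment: positive\n"
...     "Review: The plot was dull and the acting was worse. Sentiment: negative\n"
...     "Review: A beautiful film with a touching story. Sentiment:"
... )
>>> inputs = tokenizer(prompt, return_tensors="pt")
>>> outputs = model.generate(**inputs, max_new_tokens=5)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True))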
https://huggingface.co/docs/transformers/model_doc/falcon
Falcon Overview Falcon is a class of causal decoder-only models built by TII. The largest Falcon checkpoints have been trained on >=1T tokens of text, with a particular emphasis on the RefinedWeb corpus. They are made available under the Apache 2.0 license. Falcon’s architecture is modern and optimized for inference, with multi-query attention and support for efficient attention variants like FlashAttention. Both ‘base’ models trained only as causal language models as well as ‘instruct’ models that have received further fine-tuning are available. Falcon models are (as of 2023) some of the largest and most powerful open-source language models, and consistently rank highly in the OpenLLM leaderboard. Converting custom checkpoints Falcon models were initially added to the Hugging Face Hub as custom code checkpoints. However, Falcon is now fully supported in the Transformers library. If you fine-tuned a model from a custom code checkpoint, we recommend converting your checkpoint to the new in-library format, as this should give significant improvements to stability and performance, especially for generation, as well as removing the need to use trust_remote_code=True! You can convert custom code checkpoints to full Transformers checkpoints using the convert_custom_code_checkpoint.py script located in the Falcon model directory of the Transformers library. To use this script, simply call it with python convert_custom_code_checkpoint.py --checkpoint_dir my_model. This will convert your checkpoint in-place, and you can immediately load it from the directory afterwards with e.g. from_pretrained(). If your model hasn’t been uploaded to the Hub, we recommend making a backup before attempting the conversion, just in case! FalconConfig class transformers.FalconConfig < source > ( vocab_size = 65024 hidden_size = 4544 num_hidden_layers = 32 num_attention_heads = 71 layer_norm_epsilon = 1e-05 initializer_range = 0.02 use_cache = True hidden_dropout = 0.0 attention_dropout = 0.0 num_kv_heads = None alibi = False new_decoder_architecture = False multi_query = True parallel_attn = True bias = False max_position_embeddings = 2048 rope_theta = 10000.0 rope_scaling = None bos_token_id = 11 eos_token_id = 11 **kwargs ) Parameters vocab_size (int, optional, defaults to 65024) — Vocabulary size of the Falcon model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling FalconModel hidden_size (int, optional, defaults to 4544) — Dimension of the hidden representations. num_hidden_layers (int, optional, defaults to 32) — Number of hidden layers in the Transformer decoder. num_attention_heads (int, optional, defaults to 71) — Number of attention heads for each attention layer in the Transformer encoder. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. use_cache (bool, optional, defaults to True) — Whether the model should return the last key/values attentions (not used by all models). Only relevant if config.is_decoder=True. layer_norm_epsilon (float, optional, defaults to 1e-5) — The epsilon used by the layer normalization layers. hidden_dropout (float, optional, defaults to 0.0) — The dropout probability for MLP layers. attention_dropout (float, optional, defaults to 0.0) — The dropout probability for attention layers. num_kv_heads (int, optional) — Number of key-value heads to use per attention layer. If unset, defaults to the same value as num_attention_heads. 
alibi (bool, optional, defaults to False) — Whether to use ALiBi positional biases during self-attention. new_decoder_architecture (bool, optional, defaults to False) — Whether to use the new (Falcon-40B) decoder architecture. If True, the multi_query and parallel_attn arguments are ignored, as the new decoder always uses parallel attention. multi_query (bool, optional, defaults to True) — Whether to use multi-query attention in the decoder. Ignored when new_decoder_architecture is True. parallel_attn (bool, optional, defaults to True) — Whether to compute attention in parallel with the feedforward layer. If False, they are consecutive instead, as in the original Transformer architecture. Ignored when new_decoder_architecture is True. bias (bool, optional, defaults to False) — Whether to use bias on Linear layers. max_position_embeddings (int, optional, defaults to 2048) — The maximum sequence length that this model might ever be used with, when alibi is False. Pretrained Falcon models with RoPE support up to 2048 tokens. rope_theta (float, optional, defaults to 10000.0) — The base period of the RoPE embeddings. rope_scaling (Dict, optional) — Dictionary containing the scaling configuration for the RoPE embeddings. Currently supports two scaling strategies: linear and dynamic. Their scaling factor must be a float greater than 1. The expected format is {"type": strategy name, "factor": scaling factor}. When using this flag, don’t update max_position_embeddings to the expected new maximum. See the following thread for more information on how these scaling strategies behave: https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/. This is an experimental feature, subject to breaking API changes in future versions. bos_token_id (int, optional, defaults to 11) — The id of the “beginning-of-sequence” token. eos_token_id (int, optional, defaults to 11) — The id of the “end-of-sequence” token. This is the configuration class to store the configuration of a FalconModel. It is used to instantiate a Falcon model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the tiiuae/falcon-7b architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import FalconModel, FalconConfig >>> # Initializing a small (2-layer) Falcon configuration >>> configuration = FalconConfig(num_hidden_layers=2) >>> # Initializing a model from the small configuration >>> model = FalconModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config FalconModel class transformers.FalconModel < source > ( config: FalconConfig ) Parameters config (FalconConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare Falcon Model transformer outputting raw hidden-states without any specific head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
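The rope_scaling dictionary documented in FalconConfig above can be passed directly when building a configuration. The sketch below is a minimal, hedged illustration of the expected {"type": ..., "factor": ...} format; the "dynamic" strategy is one of the two documented options and the factor of 2.0 is an arbitrary illustrative value, not a recommendation.

>>> from transformers import FalconConfig

>>> # Illustrative only: configure dynamic RoPE scaling with a 2x factor
>>> configuration = FalconConfig(rope_scaling={"type": "dynamic", "factor": 2.0})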
forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Union[typing.Tuple[typing.Tuple[torch.Tensor, torch.Tensor], ...], NoneType] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.LongTensor] = None inputs_embeds: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past_key_values is None else past_key_values[0][0].shape[2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. If past_key_values is used, only input_ids that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? past_key_values (Tuple[Tuple[torch.Tensor]] of length config.num_hidden_layers) — Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have their past given to this model should not be passed as input_ids as they have already been computed. Each element of past_key_values is a tuple (past_key, past_value): past_key: [batch_size * num_heads, head_dim, kv_length] past_value: [batch_size * num_heads, kv_length, head_dim] attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If past_key_values is used, optionally only the last inputs_embeds have to be input (see past_key_values). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. 
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FalconConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. The FalconModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FalconModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("Rocketknight1/falcon-rw-1b") >>> model = FalconModel.from_pretrained("Rocketknight1/falcon-rw-1b") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state FalconForCausalLM class transformers.FalconForCausalLM < source > ( config: FalconConfig ) Parameters config (FalconConfig) — Model configuration class with all the parameters of the model. 
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The Falcon Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Union[typing.Tuple[typing.Tuple[torch.Tensor, torch.Tensor], ...], NoneType] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past_key_values is None else past_key_values[0][0].shape[2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. If past_key_values is used, only input_ids that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? past_key_values (Tuple[Tuple[torch.Tensor]] of length config.num_hidden_layers) — Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have their past given to this model should not be passed as input_ids as they have already been computed. Each element of past_key_values is a tuple (past_key, past_value): past_key: [batch_size * num_heads, head_dim, kv_length] past_value: [batch_size * num_heads, kv_length, head_dim] attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. 
This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If past_key_values is used, optionally only the last inputs_embeds have to be input (see past_key_values). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set labels = input_ids Indices are selected in [-100, 0, ..., config.vocab_size] All labels set to -100 are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size] A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FalconConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction). logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Cross attentions weights after the attention softmax, used to compute the weighted average in the cross-attention heads. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key, value states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting. Only relevant if config.is_decoder = True. Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. 
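As a hedged illustration of the labels convention described above (the shift is handled inside the model and positions set to -100 are ignored by the loss), the sketch below masks the first two prompt tokens; the checkpoint is the same one used in the other examples on this page and the choice of masked positions is purely illustrative.

>>> import torch
>>> from transformers import AutoTokenizer, FalconForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("Rocketknight1/falcon-rw-1b")
>>> model = FalconForCausalLM.from_pretrained("Rocketknight1/falcon-rw-1b")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> labels = inputs["input_ids"].clone()
>>> labels[:, :2] = -100  # these positions are excluded from the language modeling loss
>>> loss = model(**inputs, labels=labels).loss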
The FalconForCausalLM forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> import torch >>> from transformers import AutoTokenizer, FalconForCausalLM >>> tokenizer = AutoTokenizer.from_pretrained("Rocketknight1/falcon-rw-1b") >>> model = FalconForCausalLM.from_pretrained("Rocketknight1/falcon-rw-1b") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs, labels=inputs["input_ids"]) >>> loss = outputs.loss >>> logits = outputs.logits FalconForSequenceClassification class transformers.FalconForSequenceClassification < source > ( config: FalconConfig ) Parameters config (FalconConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The Falcon Model transformer with a sequence classification head on top (linear layer). FalconForSequenceClassification uses the last token in order to do the classification, as other causal models (e.g. GPT-1) do. Since it does classification on the last token, it requires to know the position of the last token. If a pad_token_id is defined in the configuration, it finds the last token that is not a padding token in each row. If no pad_token_id is defined, it simply takes the last value in each row of the batch. Since it cannot guess the padding tokens when inputs_embeds are passed instead of input_ids, it does the same (take the last value in each row of the batch). This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Union[typing.Tuple[typing.Tuple[torch.Tensor, torch.Tensor], ...], NoneType] = None attention_mask: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.SequenceClassifierOutputWithPast or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past_key_values is None else past_key_values[0][0].shape[2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. If past_key_values is used, only input_ids that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? 
past_key_values (Tuple[Tuple[torch.Tensor]] of length config.num_hidden_layers) — Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have their past given to this model should not be passed as input_ids as they have already been computed. Each element of past_key_values is a tuple (past_key, past_value): past_key: [batch_size * num_heads, head_dim, kv_length] past_value: [batch_size * num_heads, kv_length, head_dim] attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If past_key_values is used, optionally only the last inputs_embeds have to be input (see past_key_values). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). Returns transformers.modeling_outputs.SequenceClassifierOutputWithPast or tuple(torch.FloatTensor) A transformers.modeling_outputs.SequenceClassifierOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FalconConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). 
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FalconForSequenceClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example of single-label classification: >>> import torch >>> from transformers import AutoTokenizer, FalconForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("Rocketknight1/falcon-rw-1b") >>> model = FalconForSequenceClassification.from_pretrained("Rocketknight1/falcon-rw-1b") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_id = logits.argmax().item() >>> # To train a model on num_labels classes, you can pass num_labels=num_labels to .from_pretrained(...) >>> num_labels = len(model.config.id2label) >>> model = FalconForSequenceClassification.from_pretrained("Rocketknight1/falcon-rw-1b", num_labels=num_labels) >>> labels = torch.tensor([1]) >>> loss = model(**inputs, labels=labels).loss Example of multi-label classification: >>> import torch >>> from transformers import AutoTokenizer, FalconForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("Rocketknight1/falcon-rw-1b") >>> model = FalconForSequenceClassification.from_pretrained("Rocketknight1/falcon-rw-1b", problem_type="multi_label_classification") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5] >>> # To train a model on num_labels classes, you can pass num_labels=num_labels to .from_pretrained(...) >>> num_labels = len(model.config.id2label) >>> model = FalconForSequenceClassification.from_pretrained( ... "Rocketknight1/falcon-rw-1b", num_labels=num_labels, problem_type="multi_label_classification" ... ) >>> labels = torch.sum( ... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1 ... ).to(torch.float) >>> loss = model(**inputs, labels=labels).loss FalconForTokenClassification class transformers.FalconForTokenClassification < source > ( config: FalconConfig ) Parameters config (FalconConfig) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Falcon Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Union[typing.Tuple[typing.Tuple[torch.Tensor, torch.Tensor], ...], NoneType] = None attention_mask: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past_key_values is None else past_key_values[0][0].shape[2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. If past_key_values is used, only input_ids that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? past_key_values (Tuple[Tuple[torch.Tensor]] of length config.num_hidden_layers) — Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have their past given to this model should not be passed as input_ids as they have already been computed. Each element of past_key_values is a tuple (past_key, past_value): past_key: [batch_size * num_heads, head_dim, kv_length] past_value: [batch_size * num_heads, kv_length, head_dim] attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. 
This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If past_key_values is used, optionally only the last inputs_embeds have to be input (see past_key_values). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1]. A transformers.modeling_outputs.TokenClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FalconConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FalconForTokenClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FalconForTokenClassification >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("Rocketknight1/falcon-rw-1b") >>> model = FalconForTokenClassification.from_pretrained("Rocketknight1/falcon-rw-1b") >>> inputs = tokenizer( ... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt" ... ) >>> with torch.no_grad(): ... 
logits = model(**inputs).logits >>> predicted_token_class_ids = logits.argmax(-1) >>> # Note that tokens are classified rather than input words, which means that >>> # there might be more predicted token classes than words. >>> # Multiple token classes might account for the same word >>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]] >>> labels = predicted_token_class_ids >>> loss = model(**inputs, labels=labels).loss FalconForQuestionAnswering class transformers.FalconForQuestionAnswering < source > ( config ) Parameters config (FalconConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The Falcon Model transformer with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute span start logits and span end logits). This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None start_positions: typing.Optional[torch.LongTensor] = None end_positions: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) Parameters input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past_key_values is None else past_key_values[0][0].shape[2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. If past_key_values is used, only input_ids that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? past_key_values (Tuple[Tuple[torch.Tensor]] of length config.num_hidden_layers) — Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have their past given to this model should not be passed as input_ids as they have already been computed. Each element of past_key_values is a tuple (past_key, past_value): past_key: [batch_size * num_heads, head_dim, kv_length] past_value: [batch_size * num_heads, kv_length, head_dim] attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules.
Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If past_key_values is used, optionally only the last inputs_embeds have to be input (see past_key_values). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. start_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss. end_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss. The FalconForQuestionAnswering forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
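A minimal, hedged sketch of extractive question answering with FalconForQuestionAnswering follows. The checkpoint is the same base model used in the other examples on this page and has no fine-tuned QA head, so the extracted span is illustrative only; the question and context strings are made up for the example.

>>> import torch
>>> from transformers import AutoTokenizer, FalconForQuestionAnswering

>>> tokenizer = AutoTokenizer.from_pretrained("Rocketknight1/falcon-rw-1b")
>>> model = FalconForQuestionAnswering.from_pretrained("Rocketknight1/falcon-rw-1b")

>>> question, context = "Who built the Falcon models?", "The Falcon models were built by TII."
>>> inputs = tokenizer(question, context, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # Pick the most likely start and end positions and decode the span
>>> start = outputs.start_logits.argmax()
>>> end = outputs.end_logits.argmax()
>>> answer = tokenizer.decode(inputs["input_ids"][0, start : end + 1])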
https://huggingface.co/docs/transformers/model_doc/flan-t5
FLAN-T5 Overview FLAN-T5 was released in the paper Scaling Instruction-Finetuned Language Models - it is an enhanced version of T5 that has been finetuned on a mixture of tasks. One can directly use FLAN-T5 weights without finetuning the model: >>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer >>> model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small") >>> tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small") >>> inputs = tokenizer("A step by step recipe to make bolognese pasta:", return_tensors="pt") >>> outputs = model.generate(**inputs) >>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)) ['Pour a cup of bolognese into a large bowl and add the pasta'] FLAN-T5 includes the same improvements as T5 version 1.1 (see here for the full details of the model’s improvements). Google has released the following variants: google/flan-t5-small google/flan-t5-base google/flan-t5-large google/flan-t5-xl google/flan-t5-xxl. One can refer to T5’s documentation page for all tips, code examples and notebooks, as well as the FLAN-T5 model card for more details regarding training and evaluation of the model. The original checkpoints can be found here.
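Because FLAN-T5 is instruction-finetuned, the same checkpoint can be prompted with natural-language task descriptions. The following is a minimal, hedged sketch; the prompt wording and the max_new_tokens value are illustrative choices, not taken from the model card.

>>> from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

>>> model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")
>>> tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")

>>> prompt = "Translate English to German: My name is Arthur."
>>> inputs = tokenizer(prompt, return_tensors="pt")
>>> outputs = model.generate(**inputs, max_new_tokens=20)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True))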
https://huggingface.co/docs/transformers/model_doc/esm
ESM Overview This page provides code and pre-trained weights for Transformer protein language models from Meta AI’s Fundamental AI Research Team, including the state-of-the-art ESMFold and ESM-2, as well as the previously released ESM-1b and ESM-1v. Transformer protein language models were introduced in the paper Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences (https://www.pnas.org/content/118/15/e2016239118) by Alexander Rives, Joshua Meier, Tom Sercu, Siddharth Goyal, Zeming Lin, Jason Liu, Demi Guo, Myle Ott, C. Lawrence Zitnick, Jerry Ma, and Rob Fergus. The first version of this paper was preprinted in 2019 (https://www.biorxiv.org/content/10.1101/622803v1?versioned=true). ESM-2 outperforms all tested single-sequence protein language models across a range of structure prediction tasks, and enables atomic resolution structure prediction. It was released with the paper Language models of protein sequences at the scale of evolution enable accurate structure prediction by Zeming Lin, Halil Akin, Roshan Rao, Brian Hie, Zhongkai Zhu, Wenting Lu, Allan dos Santos Costa, Maryam Fazel-Zarandi, Tom Sercu, Sal Candido and Alexander Rives. Also introduced in this paper was ESMFold. It uses an ESM-2 stem with a head that can predict folded protein structures with state-of-the-art accuracy. Unlike AlphaFold2, it relies on the token embeddings from the large pre-trained protein language model stem and does not perform a multiple sequence alignment (MSA) step at inference time, which means that ESMFold checkpoints are fully “standalone” - they do not require a database of known protein sequences and structures with associated external query tools to make predictions, and are much faster as a result. The abstract from “Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences” is In the field of artificial intelligence, a combination of scale in data and model capacity enabled by unsupervised learning has led to major advances in representation learning and statistical generation. In the life sciences, the anticipated growth of sequencing promises unprecedented data on natural sequence diversity. Protein language modeling at the scale of evolution is a logical step toward predictive and generative artificial intelligence for biology. To this end, we use unsupervised learning to train a deep contextual language model on 86 billion amino acids across 250 million protein sequences spanning evolutionary diversity. The resulting model contains information about biological properties in its representations. The representations are learned from sequence data alone. The learned representation space has a multiscale organization reflecting structure from the level of biochemical properties of amino acids to remote homology of proteins. Information about secondary and tertiary structure is encoded in the representations and can be identified by linear projections. Representation learning produces features that generalize across a range of applications, enabling state-of-the-art supervised prediction of mutational effect and secondary structure and improving state-of-the-art features for long-range contact prediction.
The abstract from “Language models of protein sequences at the scale of evolution enable accurate structure prediction” is Large language models have recently been shown to develop emergent capabilities with scale, going beyond simple pattern matching to perform higher level reasoning and generate lifelike images and text. While language models trained on protein sequences have been studied at a smaller scale, little is known about what they learn about biology as they are scaled up. In this work we train models up to 15 billion parameters, the largest language models of proteins to be evaluated to date. We find that as models are scaled they learn information enabling the prediction of the three-dimensional structure of a protein at the resolution of individual atoms. We present ESMFold for high accuracy end-to-end atomic level structure prediction directly from the individual sequence of a protein. ESMFold has similar accuracy to AlphaFold2 and RoseTTAFold for sequences with low perplexity that are well understood by the language model. ESMFold inference is an order of magnitude faster than AlphaFold2, enabling exploration of the structural space of metagenomic proteins in practical timescales. Tips: ESM models are trained with a masked language modeling (MLM) objective. The original code can be found here and was developed by the Fundamental AI Research team at Meta AI. ESM-1b, ESM-1v and ESM-2 were contributed to huggingface by jasonliu and Matt. ESMFold was contributed to huggingface by Matt and Sylvain, with a big thank you to Nikita Smetanin, Roshan Rao and Tom Sercu for their help throughout the process! The HuggingFace port of ESMFold uses portions of the openfold library. The openfold library is licensed under the Apache License 2.0. Documentation resources Text classification task guide Token classification task guide Masked language modeling task guide EsmConfig class transformers.EsmConfig < source > ( vocab_size = None mask_token_id = None pad_token_id = None hidden_size = 768 num_hidden_layers = 12 num_attention_heads = 12 intermediate_size = 3072 hidden_dropout_prob = 0.1 attention_probs_dropout_prob = 0.1 max_position_embeddings = 1026 initializer_range = 0.02 layer_norm_eps = 1e-12 position_embedding_type = 'absolute' use_cache = True emb_layer_norm_before = None token_dropout = False is_folding_model = False esmfold_config = None vocab_list = None **kwargs ) Parameters vocab_size (int, optional) — Vocabulary size of the ESM model. Defines the number of different tokens that can be represented by the input_ids passed when calling EsmModel. mask_token_id (int, optional) — The index of the mask token in the vocabulary. This must be included in the config because of the “mask-dropout” scaling trick, which will scale the inputs depending on the number of masked tokens. pad_token_id (int, optional) — The index of the padding token in the vocabulary. This must be included in the config because certain parts of the ESM code use this instead of the attention mask. hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder.
hidden_dropout_prob (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (float, optional, defaults to 0.1) — The dropout ratio for the attention probabilities. max_position_embeddings (int, optional, defaults to 1026) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers. position_embedding_type (str, optional, defaults to "absolute") — Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query", "rotary". For positional embeddings use "absolute". For more information on "relative_key", please refer to Self-Attention with Relative Position Representations (Shaw et al.). For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models with Better Relative Position Embeddings (Huang et al.). is_decoder (bool, optional, defaults to False) — Whether the model is used as a decoder or not. If False, the model is used as an encoder. use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if config.is_decoder=True. emb_layer_norm_before (bool, optional) — Whether to apply layer normalization after embeddings but before the main stem of the network. token_dropout (bool, defaults to False) — When this is enabled, masked tokens are treated as if they had been dropped out by input dropout. This is the configuration class to store the configuration of an EsmModel. It is used to instantiate an ESM model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the ESM facebook/esm-1b architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Examples: >>> from transformers import EsmModel, EsmConfig >>> # Initializing an ESM facebook/esm-1b style configuration >>> configuration = EsmConfig() >>> # Initializing a model from the configuration >>> model = EsmModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config to_dict < source > ( ) → Dict[str, any] Dictionary of all the attributes that make up this configuration instance. Serializes this instance to a Python dictionary. Override the default to_dict(). EsmTokenizer class transformers.EsmTokenizer < source > ( vocab_file unk_token = '<unk>' cls_token = '<cls>' pad_token = '<pad>' mask_token = '<mask>' eos_token = '<eos>' **kwargs ) Constructs an ESM tokenizer. build_inputs_with_special_tokens < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) get_special_tokens_mask < source > ( token_ids_0: typing.List token_ids_1: typing.Optional[typing.List] = None already_has_special_tokens: bool = False ) → A list of integers in the range [0, 1] Parameters token_ids_0 (List[int]) — List of ids of the first sequence. token_ids_1 (List[int], optional) — List of ids of the second sequence. already_has_special_tokens (bool, optional, defaults to False) — Whether or not the token list is already formatted with special tokens for the model. Returns A list of integers in the range [0, 1] 1 for a special token, 0 for a sequence token.
Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model or encode_plus methods. create_token_type_ids_from_sequences < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — The first tokenized sequence. token_ids_1 (List[int], optional) — The second tokenized sequence. The token type ids. Create the token type IDs corresponding to the sequences passed. What are token type IDs? Should be overridden in a subclass if the model has a special way of building those. save_vocabulary < source > ( save_directory filename_prefix ) EsmModel class transformers.EsmModel < source > ( config add_pooling_layer = True ) Parameters config (EsmConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare ESM Model transformer outputting raw hidden-states without any specific head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in Attention is all you need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. To behave as a decoder the model needs to be initialized with the is_decoder argument of the configuration set to True. To be used in a Seq2Seq model, the model needs to be initialized with both the is_decoder argument and add_cross_attention set to True; an encoder_hidden_states is then expected as an input to the forward pass. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None encoder_hidden_states: typing.Optional[torch.Tensor] = None encoder_attention_mask: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape ((batch_size, sequence_length))) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape ((batch_size, sequence_length)), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
What are attention masks? position_ids (torch.LongTensor of shape ((batch_size, sequence_length)), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape ((batch_size, sequence_length), hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). A transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (EsmConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. 
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. The EsmModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, EsmModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D") >>> model = EsmModel.from_pretrained("facebook/esm2_t6_8M_UR50D") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state EsmForMaskedLM class transformers.EsmForMaskedLM < source > ( config ) Parameters config (EsmConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. ESM Model with a language modeling head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
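The example further below masks a token in an English sentence to illustrate the API. For a protein language model such as ESM-2, the same calls are typically used to mask a single residue and inspect the predicted amino-acid distribution at that position. The following is a minimal sketch of that workflow, not part of the official reference: the sequence and masked position are arbitrary, and it assumes the tokenizer splits raw protein strings into per-residue tokens, as in the ESMFold example later on this page.

>>> import torch
>>> from transformers import AutoTokenizer, EsmForMaskedLM

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
>>> model = EsmForMaskedLM.from_pretrained("facebook/esm2_t6_8M_UR50D")

>>> # Hypothetical sequence; replace the residue at position 3 with the mask token
>>> residues = list("MLKNVQVQLV")
>>> residues[3] = tokenizer.mask_token
>>> inputs = tokenizer("".join(residues), return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # Distribution over the vocabulary at the masked position, and the top prediction
>>> mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
>>> probabilities = logits[0, mask_index].softmax(dim=-1)
>>> predicted_residue = tokenizer.decode(probabilities.argmax(dim=-1))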
forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None encoder_hidden_states: typing.Optional[torch.FloatTensor] = None encoder_attention_mask: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size] kwargs (Dict[str, any], optional, defaults to {}) — Used to hide legacy arguments that have been deprecated. A transformers.modeling_outputs.MaskedLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (EsmConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). 
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The EsmForMaskedLM forward method overrides the __call__ special method. Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, EsmForMaskedLM >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D") >>> model = EsmForMaskedLM.from_pretrained("facebook/esm2_t6_8M_UR50D") >>> inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> # retrieve index of <mask> >>> mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0] >>> predicted_token_id = logits[0, mask_token_index].argmax(axis=-1) >>> labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"] >>> # mask labels of non-<mask> tokens >>> labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100) >>> outputs = model(**inputs, labels=labels) EsmForSequenceClassification class transformers.EsmForSequenceClassification < source > ( config ) Parameters config (EsmConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. ESM Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (EsmConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The EsmForSequenceClassification forward method, overrides the __call__ special method. 
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example of single-label classification: >>> import torch >>> from transformers import AutoTokenizer, EsmForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D") >>> model = EsmForSequenceClassification.from_pretrained("facebook/esm2_t6_8M_UR50D") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_id = logits.argmax().item() >>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)` >>> num_labels = len(model.config.id2label) >>> model = EsmForSequenceClassification.from_pretrained("facebook/esm2_t6_8M_UR50D", num_labels=num_labels) >>> labels = torch.tensor([1]) >>> loss = model(**inputs, labels=labels).loss Example of multi-label classification: >>> import torch >>> from transformers import AutoTokenizer, EsmForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D") >>> model = EsmForSequenceClassification.from_pretrained("facebook/esm2_t6_8M_UR50D", problem_type="multi_label_classification") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5] >>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)` >>> num_labels = len(model.config.id2label) >>> model = EsmForSequenceClassification.from_pretrained( ... "facebook/esm2_t6_8M_UR50D", num_labels=num_labels, problem_type="multi_label_classification" ... ) >>> labels = torch.sum( ... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1 ... ).to(torch.float) >>> loss = model(**inputs, labels=labels).loss EsmForTokenClassification class transformers.EsmForTokenClassification < source > ( config ) Parameters config (EsmConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. ESM Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary.
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1]. A transformers.modeling_outputs.TokenClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (EsmConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The EsmForTokenClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
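In addition to the generic example that follows, a per-residue task usually starts by attaching the token classification head with a task-specific label set. The snippet below is an illustrative sketch only: the three labels are hypothetical, and the num_labels, id2label and label2id keyword arguments follow the same from_pretrained pattern used in the sequence classification examples above.

>>> from transformers import EsmForTokenClassification

>>> # Hypothetical 3-class per-residue labelling scheme (labels are illustrative, not from the library)
>>> id2label = {0: "helix", 1: "strand", 2: "coil"}
>>> label2id = {label: idx for idx, label in id2label.items()}

>>> model = EsmForTokenClassification.from_pretrained(
...     "facebook/esm2_t6_8M_UR50D",
...     num_labels=len(id2label),
...     id2label=id2label,
...     label2id=label2id,
... )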
Example: >>> from transformers import AutoTokenizer, EsmForTokenClassification >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D") >>> model = EsmForTokenClassification.from_pretrained("facebook/esm2_t6_8M_UR50D") >>> inputs = tokenizer( ... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt" ... ) >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_token_class_ids = logits.argmax(-1) >>> # Note that tokens are classified rather than input words, which means that >>> # there might be more predicted token classes than words. >>> # Multiple token classes might account for the same word >>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]] >>> labels = predicted_token_class_ids >>> loss = model(**inputs, labels=labels).loss EsmForProteinFolding class transformers.EsmForProteinFolding < source > ( config ) Parameters config (EsmConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. ESMForProteinFolding is the HuggingFace port of the original ESMFold model. It consists of an ESM-2 “stem” followed by a protein folding “head”, although unlike most other output heads, this “head” is similar in size and runtime to the rest of the model combined! It outputs a dictionary containing predicted structural information about the input protein(s). This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. forward < source > ( input_ids: Tensor attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None masking_pattern: typing.Optional[torch.Tensor] = None num_recycles: typing.Optional[int] = None ) → transformers.models.esm.modeling_esmfold.EsmForProteinFoldingOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? masking_pattern (torch.LongTensor of shape (batch_size, sequence_length), optional) — Locations of tokens to mask during training as a form of regularization. Mask values selected in [0, 1]. num_recycles (int, optional, defaults to None) — Number of times to recycle the input sequence. If None, defaults to config.num_recycles. “Recycling” consists of passing the output of the folding trunk back in as input to the trunk. During training, the number of recycles should vary with each batch, to ensure that the model learns to output valid predictions after each recycle.
During inference, num_recycles should be set to the highest value that the model was trained with for maximum accuracy. Accordingly, when this value is set to None, config.max_recycles is used. Returns transformers.models.esm.modeling_esmfold.EsmForProteinFoldingOutput or tuple(torch.FloatTensor) A transformers.models.esm.modeling_esmfold.EsmForProteinFoldingOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.esm.configuration_esm.EsmConfig'>) and inputs. frames (torch.FloatTensor) — Output frames. sidechain_frames (torch.FloatTensor) — Output sidechain frames. unnormalized_angles (torch.FloatTensor) — Predicted unnormalized backbone and side chain torsion angles. angles (torch.FloatTensor) — Predicted backbone and side chain torsion angles. positions (torch.FloatTensor) — Predicted positions of the backbone and side chain atoms. states (torch.FloatTensor) — Hidden states from the protein folding trunk. s_s (torch.FloatTensor) — Per-residue embeddings derived by concatenating the hidden states of each layer of the ESM-2 LM stem. s_z (torch.FloatTensor) — Pairwise residue embeddings. distogram_logits (torch.FloatTensor) — Input logits to the distogram used to compute residue distances. lm_logits (torch.FloatTensor) — Logits output by the ESM-2 protein language model stem. aatype (torch.FloatTensor) — Input amino acids (AlphaFold2 indices). atom14_atom_exists (torch.FloatTensor) — Whether each atom exists in the atom14 representation. residx_atom14_to_atom37 (torch.FloatTensor) — Mapping between atoms in the atom14 and atom37 representations. residx_atom37_to_atom14 (torch.FloatTensor) — Mapping between atoms in the atom37 and atom14 representations. atom37_atom_exists (torch.FloatTensor) — Whether each atom exists in the atom37 representation. residue_index (torch.FloatTensor) — The index of each residue in the protein chain. Unless internal padding tokens are used, this will just be a sequence of integers from 0 to sequence_length. lddt_head (torch.FloatTensor) — Raw outputs from the lddt head used to compute plddt. plddt (torch.FloatTensor) — Per-residue confidence scores. Regions of low confidence may indicate areas where the model’s prediction is uncertain, or where the protein structure is disordered. ptm_logits (torch.FloatTensor) — Raw logits used for computing ptm. ptm (torch.FloatTensor) — TM-score output representing the model’s high-level confidence in the overall structure. aligned_confidence_probs (torch.FloatTensor) — Per-residue confidence scores for the aligned structure. predicted_aligned_error (torch.FloatTensor) — Predicted error between the model’s prediction and the ground truth. max_predicted_aligned_error (torch.FloatTensor) — Per-sample maximum predicted error. The EsmForProteinFolding forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
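As a complement to the example that follows, the sketch below shows how some of the output fields documented above might be inspected after a forward pass. It reuses the facebook/esmfold_v1 checkpoint and the sequence from the example and is illustrative only.

>>> import torch
>>> from transformers import AutoTokenizer, EsmForProteinFolding

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/esmfold_v1")
>>> model = EsmForProteinFolding.from_pretrained("facebook/esmfold_v1")

>>> inputs = tokenizer(["MLKNVQVQLV"], return_tensors="pt", add_special_tokens=False)
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # Per-residue confidence (plddt) and overall structure confidence (ptm), as documented above
>>> mean_plddt = outputs.plddt.mean()
>>> ptm_score = outputs.ptm
>>> # Predicted atom positions for the backbone and side chains
>>> positions = outputs.positions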
Example: >>> from transformers import AutoTokenizer, EsmForProteinFolding >>> model = EsmForProteinFolding.from_pretrained("facebook/esmfold_v1") >>> tokenizer = AutoTokenizer.from_pretrained("facebook/esmfold_v1") >>> inputs = tokenizer(["MLKNVQVQLV"], return_tensors="pt", add_special_tokens=False) >>> outputs = model(**inputs) >>> folded_positions = outputs.positions TFEsmModel class transformers.TFEsmModel < source > ( *args **kwargs ) Parameters config (EsmConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare ESM Model transformer outputting raw hidden-states without any specific head on top. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a Keras Model subclass. Use it as a regular Keras model and refer to the TF/Keras documentation for all matters related to general usage and behavior. call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None encoder_hidden_states: np.ndarray | tf.Tensor | None = None encoder_attention_mask: np.ndarray | tf.Tensor | None = None past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None use_cache: Optional[bool] = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions or tuple(tf.Tensor) Parameters input_ids (tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. 
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. encoder_hidden_states (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. encoder_attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) — contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). use_cache (bool, optional, defaults to True) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). Set to False during training, True during generation A transformers.modeling_tf_outputs.TFBaseModelOutputWithPoolingAndCrossAttentions or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (EsmConfig) and inputs. last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. This output is usually not a good summary of the semantic content of the input, you’re often better with averaging or pooling the sequence of hidden-states for the whole input sequence. past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)). Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). 
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. The TFEsmModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFEsmModel >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D") >>> model = TFEsmModel.from_pretrained("facebook/esm2_t6_8M_UR50D") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> outputs = model(inputs) >>> last_hidden_states = outputs.last_hidden_state TFEsmForMaskedLM class transformers.TFEsmForMaskedLM < source > ( *args **kwargs ) Parameters config (EsmConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. ESM Model with a language modeling head on top. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a Keras Model subclass. Use it as a regular Keras model and refer to the TF/Keras documentation for all matters related to general usage and behavior. call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None encoder_hidden_states: np.ndarray | tf.Tensor | None = None encoder_attention_mask: np.ndarray | tf.Tensor | None = None labels: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: bool = False ) → transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor) Parameters input_ids (tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? 
head_mask (tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (tf.Tensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size] kwargs (Dict[str, any], optional, defaults to {}) — Used to hide legacy arguments that have been deprecated. A transformers.modeling_tf_outputs.TFMaskedLMOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (EsmConfig) and inputs. loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Masked language modeling (MLM) loss. logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFEsmForMaskedLM forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
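Beyond the single best prediction shown in the example that follows, it can be useful to look at several candidate tokens for the masked position. The sketch below is a hedged variant of that example built from standard TensorFlow ops (tf.where, tf.gather_nd, tf.math.top_k); the choice of k=5 is arbitrary and the snippet is not part of the official reference.

>>> import tensorflow as tf
>>> from transformers import AutoTokenizer, TFEsmForMaskedLM

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
>>> model = TFEsmForMaskedLM.from_pretrained("facebook/esm2_t6_8M_UR50D")

>>> inputs = tokenizer("The capital of France is <mask>.", return_tensors="tf")
>>> logits = model(**inputs).logits

>>> # Locate the masked position and keep the five highest-scoring vocabulary ids there
>>> mask_token_index = tf.where((inputs.input_ids == tokenizer.mask_token_id)[0])
>>> mask_logits = tf.gather_nd(logits[0], indices=mask_token_index)
>>> top_k = tf.math.top_k(mask_logits, k=5)
>>> candidate_tokens = [tokenizer.decode([token_id]) for token_id in top_k.indices.numpy()[0]]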
Example: >>> from transformers import AutoTokenizer, TFEsmForMaskedLM >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D") >>> model = TFEsmForMaskedLM.from_pretrained("facebook/esm2_t6_8M_UR50D") >>> inputs = tokenizer("The capital of France is <mask>.", return_tensors="tf") >>> logits = model(**inputs).logits >>> >>> mask_token_index = tf.where((inputs.input_ids == tokenizer.mask_token_id)[0]) >>> selected_logits = tf.gather_nd(logits[0], indices=mask_token_index) >>> predicted_token_id = tf.math.argmax(selected_logits, axis=-1) >>> labels = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"] >>> >>> labels = tf.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100) >>> outputs = model(**inputs, labels=labels) TFEsmForSequenceClassification class transformers.TFEsmForSequenceClassification < source > ( *args **kwargs ) Parameters config (EsmConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. ESM Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a Keras Model subclass. Use it as a regular Keras model and refer to the TF/Keras documentation for all matters related to general usage and behavior. call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None labels: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: bool = False ) → transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor) Parameters input_ids (tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. 
This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (tf.Tensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (EsmConfig) and inputs. loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFEsmForSequenceClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFEsmForSequenceClassification >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D") >>> model = TFEsmForSequenceClassification.from_pretrained("facebook/esm2_t6_8M_UR50D") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> logits = model(**inputs).logits >>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0]) >>> >>> num_labels = len(model.config.id2label) >>> model = TFEsmForSequenceClassification.from_pretrained("facebook/esm2_t6_8M_UR50D", num_labels=num_labels) >>> labels = tf.constant(1) >>> loss = model(**inputs, labels=labels).loss TFEsmForTokenClassification class transformers.TFEsmForTokenClassification < source > ( *args **kwargs ) Parameters config (EsmConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. 
ESM Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a Keras Model subclass. Use it as a regular Keras model and refer to the TF/Keras documentation for all matters related to general usage and behavior. call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None labels: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: bool = False ) → transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor) Parameters input_ids (tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (tf.Tensor of shape (batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1]. A transformers.modeling_tf_outputs.TFTokenClassifierOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (EsmConfig) and inputs. loss (tf.Tensor of shape (n,), optional, where n is the number of unmasked labels, returned when labels is provided) — Classification loss. logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). 
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFEsmForTokenClassification forward method overrides the __call__ special method. Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFEsmForTokenClassification >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D") >>> model = TFEsmForTokenClassification.from_pretrained("facebook/esm2_t6_8M_UR50D") >>> inputs = tokenizer( ... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf" ... ) >>> logits = model(**inputs).logits >>> predicted_token_class_ids = tf.math.argmax(logits, axis=-1) >>> # Note that tokens are classified rather than input words, which means that >>> # there might be more predicted token classes than words. >>> # Multiple token classes might account for the same word >>> predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()] >>> labels = predicted_token_class_ids >>> loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss)
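A common downstream use of the classes documented above is to extract a fixed-size embedding per protein from the bare EsmModel, for example as input to a separate classifier. The sketch below mean-pools the last hidden state over the attention mask; it is an illustrative pattern rather than part of the ESM API, and the two sequences are arbitrary.

>>> import torch
>>> from transformers import AutoTokenizer, EsmModel

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/esm2_t6_8M_UR50D")
>>> model = EsmModel.from_pretrained("facebook/esm2_t6_8M_UR50D")

>>> # Two hypothetical protein sequences, padded together into one batch
>>> sequences = ["MLKNVQVQLV", "MKTAYIAKQR"]
>>> inputs = tokenizer(sequences, return_tensors="pt", padding=True)

>>> with torch.no_grad():
...     last_hidden_state = model(**inputs).last_hidden_state

>>> # Mean-pool over real (non-padding) tokens to get one vector per sequence
>>> mask = inputs.attention_mask.unsqueeze(-1).to(last_hidden_state.dtype)
>>> embeddings = (last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
>>> embeddings.shape  # (batch_size, hidden_size)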
https://huggingface.co/docs/transformers/model_doc/flaubert
FlauBERT Overview The FlauBERT model was proposed in the paper FlauBERT: Unsupervised Language Model Pre-training for French by Hang Le et al. It’s a transformer model pretrained using a masked language modeling (MLM) objective (like BERT). The abstract from the paper is the following: Language models have become a key step to achieve state-of-the art results in many different Natural Language Processing (NLP) tasks. Leveraging the huge amount of unlabeled texts nowadays available, they provide an efficient way to pre-train continuous word representations that can be fine-tuned for a downstream task, along with their contextualization at the sentence level. This has been widely demonstrated for English using contextualized representations (Dai and Le, 2015; Peters et al., 2018; Howard and Ruder, 2018; Radford et al., 2018; Devlin et al., 2019; Yang et al., 2019b). In this paper, we introduce and share FlauBERT, a model learned on a very large and heterogeneous French corpus. Models of different sizes are trained using the new CNRS (French National Centre for Scientific Research) Jean Zay supercomputer. We apply our French language models to diverse NLP tasks (text classification, paraphrasing, natural language inference, parsing, word sense disambiguation) and show that most of the time they outperform other pretraining approaches. Different versions of FlauBERT as well as a unified evaluation protocol for the downstream tasks, called FLUE (French Language Understanding Evaluation), are shared to the research community for further reproducible experiments in French NLP. This model was contributed by formiel. The original code can be found here. Tips: Like RoBERTa, without the sentence ordering prediction (so just trained on the MLM objective). Documentation resources Text classification task guide Token classification task guide Question answering task guide Masked language modeling task guide Multiple choice task guide FlaubertConfig class transformers.FlaubertConfig < source > ( pre_norm = False layerdrop = 0.0 vocab_size = 30145 emb_dim = 2048 n_layers = 12 n_heads = 16 dropout = 0.1 attention_dropout = 0.1 gelu_activation = True sinusoidal_embeddings = False causal = False asm = False n_langs = 1 use_lang_emb = True max_position_embeddings = 512 embed_init_std = 0.02209708691207961 layer_norm_eps = 1e-12 init_std = 0.02 bos_index = 0 eos_index = 1 pad_index = 2 unk_index = 3 mask_index = 5 is_encoder = True summary_type = 'first' summary_use_proj = True summary_activation = None summary_proj_to_labels = True summary_first_dropout = 0.1 start_n_top = 5 end_n_top = 5 mask_token_id = 0 lang_id = 0 pad_token_id = 2 bos_token_id = 0 **kwargs ) Parameters pre_norm (bool, optional, defaults to False) — Whether to apply the layer normalization before or after the feed forward layer following the attention in each layer (Vaswani et al., Tensor2Tensor for Neural Machine Translation. 2018) layerdrop (float, optional, defaults to 0.0) — Probability to drop layers during training (Fan et al., Reducing Transformer Depth on Demand with Structured Dropout. ICLR 2020) vocab_size (int, optional, defaults to 30145) — Vocabulary size of the FlauBERT model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling FlaubertModel or TFFlaubertModel. emb_dim (int, optional, defaults to 2048) — Dimensionality of the encoder layers and the pooler layer. n_layer (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. 
n_heads (int, optional, defaults to 16) — Number of attention heads for each attention layer in the Transformer encoder. dropout (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_dropout (float, optional, defaults to 0.1) — The dropout probability for the attention mechanism. gelu_activation (bool, optional, defaults to True) — Whether or not to use a gelu activation instead of relu. sinusoidal_embeddings (bool, optional, defaults to False) — Whether or not to use sinusoidal positional embeddings instead of absolute positional embeddings. causal (bool, optional, defaults to False) — Whether or not the model should behave in a causal manner. Causal models use a triangular attention mask in order to only attend to the left-side context instead of a bidirectional context. asm (bool, optional, defaults to False) — Whether or not to use an adaptive log softmax projection layer instead of a linear layer for the prediction layer. n_langs (int, optional, defaults to 1) — The number of languages the model handles. Set to 1 for monolingual models. use_lang_emb (bool, optional, defaults to True) — Whether to use language embeddings. Some models use additional language embeddings, see the multilingual models page for information on how to use them. max_position_embeddings (int, optional, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). embed_init_std (float, optional, defaults to 2048^-0.5) — The standard deviation of the truncated_normal_initializer for initializing the embedding matrices. init_std (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices except the embedding matrices. layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers. bos_index (int, optional, defaults to 0) — The index of the beginning of sentence token in the vocabulary. eos_index (int, optional, defaults to 1) — The index of the end of sentence token in the vocabulary. pad_index (int, optional, defaults to 2) — The index of the padding token in the vocabulary. unk_index (int, optional, defaults to 3) — The index of the unknown token in the vocabulary. mask_index (int, optional, defaults to 5) — The index of the masking token in the vocabulary. is_encoder (bool, optional, defaults to True) — Whether or not the initialized model should be a transformer encoder or decoder as seen in Vaswani et al. summary_type (string, optional, defaults to "first") — Argument used when doing sequence summary. Used in the sequence classification and multiple choice models. Has to be one of the following options: "last": Take the last token hidden state (like XLNet). "first": Take the first token hidden state (like BERT). "mean": Take the mean of all tokens hidden states. "cls_index": Supply a Tensor of classification token position (like GPT/GPT-2). "attn": Not implemented now, use multi-head attention. summary_use_proj (bool, optional, defaults to True) — Argument used when doing sequence summary. Used in the sequence classification and multiple choice models. Whether or not to add a projection after the vector extraction. summary_activation (str, optional) — Argument used when doing sequence summary. Used in the sequence classification and multiple choice models. 
Pass "tanh" for a tanh activation to the output; any other value will result in no activation. summary_proj_to_labels (bool, optional, defaults to True) — Used in the sequence classification and multiple choice models. Whether the projection outputs should have config.num_labels or config.hidden_size classes. summary_first_dropout (float, optional, defaults to 0.1) — Used in the sequence classification and multiple choice models. The dropout ratio to be used after the projection and activation. start_n_top (int, optional, defaults to 5) — Used in the SQuAD evaluation script. end_n_top (int, optional, defaults to 5) — Used in the SQuAD evaluation script. mask_token_id (int, optional, defaults to 0) — Model agnostic parameter to identify masked tokens when generating text in an MLM context. lang_id (int, optional, defaults to 0) — The ID of the language used by the model. This parameter is used when generating text in a given language. This is the configuration class to store the configuration of a FlaubertModel or a TFFlaubertModel. It is used to instantiate a FlauBERT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the FlauBERT flaubert/flaubert_base_uncased architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. FlaubertTokenizer class transformers.FlaubertTokenizer < source > ( vocab_file merges_file do_lowercase = False unk_token = '<unk>' bos_token = '<s>' sep_token = '</s>' pad_token = '<pad>' cls_token = '</s>' mask_token = '<special1>' additional_special_tokens = ['<special0>', '<special1>', '<special2>', '<special3>', '<special4>', '<special5>', '<special6>', '<special7>', '<special8>', '<special9>'] lang2id = None id2lang = None **kwargs ) Parameters vocab_file (str) — Vocabulary file. merges_file (str) — Merges file. do_lowercase (bool, optional, defaults to False) — Controls lower casing. unk_token (str, optional, defaults to "<unk>") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. bos_token (str, optional, defaults to "<s>") — The beginning of sequence token that was used during pretraining. Can be used as a sequence classifier token. When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the cls_token. sep_token (str, optional, defaults to "</s>") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. pad_token (str, optional, defaults to "<pad>") — The token used for padding, for example when batching sequences of different lengths. cls_token (str, optional, defaults to "</s>") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. mask_token (str, optional, defaults to "<special1>") — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. 
additional_special_tokens (List[str], optional, defaults to ["<special0>","<special1>","<special2>","<special3>","<special4>","<special5>","<special6>","<special7>","<special8>","<special9>"]) — List of additional special tokens. lang2id (Dict[str, int], optional) — Dictionary mapping language string identifiers to their IDs. id2lang (Dict[int, str], optional) — Dictionary mapping language IDs to their string identifiers. Construct a Flaubert tokenizer. Based on Byte-Pair Encoding. The tokenization process is the following: Moses preprocessing and tokenization. Normalizing all input text. The arguments special_tokens and the function set_special_tokens can be used to add additional symbols (like "classify") to a vocabulary. The argument do_lowercase controls lower casing (automatically set for pretrained vocabularies). This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. build_inputs_with_special_tokens < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs to which the special tokens will be added. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of input IDs with the appropriate special tokens. Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. An XLM sequence has the following format: single sequence: <s> X </s> pair of sequences: <s> A </s> B </s> convert_tokens_to_string < source > ( tokens ) Converts a sequence of tokens (string) into a single string. create_token_type_ids_from_sequences < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of token type IDs according to the given sequence(s). Create a mask from the two sequences passed to be used in a sequence-pair classification task. An XLM sequence pair mask has the following format: 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 | first sequence | second sequence | If token_ids_1 is None, this method only returns the first portion of the mask (0s). get_special_tokens_mask < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None already_has_special_tokens: bool = False ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. already_has_special_tokens (bool, optional, defaults to False) — Whether or not the token list is already formatted with special tokens for the model. A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model method. 
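The following is a minimal usage sketch, not taken from the original documentation, that illustrates the special-token helpers described above; it assumes the flaubert/flaubert_base_cased checkpoint used elsewhere on this page and the example sentences are placeholders.
>>> from transformers import FlaubertTokenizer

>>> tokenizer = FlaubertTokenizer.from_pretrained("flaubert/flaubert_base_cased")
>>> ids_a = tokenizer.encode("Bonjour le monde", add_special_tokens=False)
>>> ids_b = tokenizer.encode("Comment allez-vous ?", add_special_tokens=False)

>>> # Single sequence: <s> X </s>
>>> tokenizer.build_inputs_with_special_tokens(ids_a)

>>> # Pair of sequences: <s> A </s> B </s>
>>> tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)

>>> # Token type IDs: 0 for the first segment, 1 for the second
>>> tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)

>>> # Special tokens mask: 1 for special tokens, 0 for sequence tokens
>>> tokenizer.get_special_tokens_mask(ids_a, ids_b)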
FlaubertModel class transformers.FlaubertModel < source > ( config ) forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None langs: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None lengths: typing.Optional[torch.LongTensor] = None cache: typing.Union[typing.Dict[str, torch.FloatTensor], NoneType] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? lengths (torch.LongTensor of shape (batch_size,), optional) — Length of each sentence that can be used to avoid performing attention on padding token indices. You can also use attention_mask for the same result (see above), kept here for compatibility. Indices selected in [0, ..., input_ids.size(-1)]: cache (Dict[str, torch.FloatTensor], optional) — Dictionary strings to torch.FloatTensor that contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential decoding. The dictionary object will be modified in-place during the forward pass to add newly computed hidden-states. head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. 
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FlaubertConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaubertModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FlaubertModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased") >>> model = FlaubertModel.from_pretrained("flaubert/flaubert_base_cased") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state FlaubertWithLMHeadModel class transformers.FlaubertWithLMHeadModel < source > ( config ) Parameters config (FlaubertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The Flaubert Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None langs: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None lengths: typing.Optional[torch.Tensor] = None cache: typing.Union[typing.Dict[str, torch.Tensor], NoneType] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? lengths (torch.LongTensor of shape (batch_size,), optional) — Length of each sentence that can be used to avoid performing attention on padding token indices. You can also use attention_mask for the same result (see above), kept here for compatibility. Indices selected in [0, ..., input_ids.size(-1)]: cache (Dict[str, torch.FloatTensor], optional) — Dictionary strings to torch.FloatTensor that contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential decoding. The dictionary object will be modified in-place during the forward pass to add newly computed hidden-states. head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. 
labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set labels = input_ids Indices are selected in [-100, 0, ..., config.vocab_size] All labels set to -100 are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size] A transformers.modeling_outputs.MaskedLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FlaubertConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaubertWithLMHeadModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FlaubertWithLMHeadModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased") >>> model = FlaubertWithLMHeadModel.from_pretrained("flaubert/flaubert_base_cased") >>> inputs = tokenizer("The capital of France is <special1>.", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> >>> mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0] >>> predicted_token_id = logits[0, mask_token_index].argmax(axis=-1) >>> labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"] >>> >>> labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100) >>> outputs = model(**inputs, labels=labels) FlaubertForSequenceClassification class transformers.FlaubertForSequenceClassification < source > ( config ) Parameters config (FlaubertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Flaubert Model with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from PreTrainedModel. 
Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None langs: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None lengths: typing.Optional[torch.Tensor] = None cache: typing.Union[typing.Dict[str, torch.Tensor], NoneType] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? lengths (torch.LongTensor of shape (batch_size,), optional) — Length of each sentence that can be used to avoid performing attention on padding token indices. You can also use attention_mask for the same result (see above), kept here for compatibility. Indices selected in [0, ..., input_ids.size(-1)]: cache (Dict[str, torch.FloatTensor], optional) — Dictionary strings to torch.FloatTensor that contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential decoding. The dictionary object will be modified in-place during the forward pass to add newly computed hidden-states. head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. 
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FlaubertConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaubertForSequenceClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example of single-label classification: >>> import torch >>> from transformers import AutoTokenizer, FlaubertForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased") >>> model = FlaubertForSequenceClassification.from_pretrained("flaubert/flaubert_base_cased") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... 
logits = model(**inputs).logits >>> predicted_class_id = logits.argmax().item() >>> >>> num_labels = len(model.config.id2label) >>> model = FlaubertForSequenceClassification.from_pretrained("flaubert/flaubert_base_cased", num_labels=num_labels) >>> labels = torch.tensor([1]) >>> loss = model(**inputs, labels=labels).loss Example of multi-label classification: >>> import torch >>> from transformers import AutoTokenizer, FlaubertForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased") >>> model = FlaubertForSequenceClassification.from_pretrained("flaubert/flaubert_base_cased", problem_type="multi_label_classification") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5] >>> >>> num_labels = len(model.config.id2label) >>> model = FlaubertForSequenceClassification.from_pretrained( ... "flaubert/flaubert_base_cased", num_labels=num_labels, problem_type="multi_label_classification" ... ) >>> labels = torch.sum( ... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1 ... ).to(torch.float) >>> loss = model(**inputs, labels=labels).loss FlaubertForMultipleChoice class transformers.FlaubertForMultipleChoice < source > ( config *inputs **kwargs ) Parameters config (FlaubertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Flaubert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None langs: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None lengths: typing.Optional[torch.Tensor] = None cache: typing.Union[typing.Dict[str, torch.Tensor], NoneType] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. 
What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? lengths (torch.LongTensor of shape (batch_size,), optional) — Length of each sentence that can be used to avoid performing attention on padding token indices. You can also use attention_mask for the same result (see above), kept here for compatibility. Indices selected in [0, ..., input_ids.size(-1)]: cache (Dict[str, torch.FloatTensor], optional) — Dictionary strings to torch.FloatTensor that contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential decoding. The dictionary object will be modified in-place during the forward pass to add newly computed hidden-states. head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See input_ids above) A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FlaubertConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above). Classification scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. 
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaubertForMultipleChoice forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FlaubertForMultipleChoice >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased") >>> model = FlaubertForMultipleChoice.from_pretrained("flaubert/flaubert_base_cased") >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." >>> choice0 = "It is eaten with a fork and a knife." >>> choice1 = "It is eaten while held in the hand." >>> labels = torch.tensor(0).unsqueeze(0) >>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True) >>> outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) >>> >>> loss = outputs.loss >>> logits = outputs.logits FlaubertForTokenClassification class transformers.FlaubertForTokenClassification < source > ( config ) Parameters config (FlaubertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Flaubert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None langs: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None lengths: typing.Optional[torch.Tensor] = None cache: typing.Union[typing.Dict[str, torch.Tensor], NoneType] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? 
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? lengths (torch.LongTensor of shape (batch_size,), optional) — Length of each sentence that can be used to avoid performing attention on padding token indices. You can also use attention_mask for the same result (see above), kept here for compatibility. Indices selected in [0, ..., input_ids.size(-1)]: cache (Dict[str, torch.FloatTensor], optional) — Dictionary strings to torch.FloatTensor that contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential decoding. The dictionary object will be modified in-place during the forward pass to add newly computed hidden-states. head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1]. A transformers.modeling_outputs.TokenClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FlaubertConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). 
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaubertForTokenClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FlaubertForTokenClassification >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased") >>> model = FlaubertForTokenClassification.from_pretrained("flaubert/flaubert_base_cased") >>> inputs = tokenizer( ... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt" ... ) >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_token_class_ids = logits.argmax(-1) >>> >>> >>> >>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]] >>> labels = predicted_token_class_ids >>> loss = model(**inputs, labels=labels).loss FlaubertForQuestionAnsweringSimple class transformers.FlaubertForQuestionAnsweringSimple < source > ( config ) Parameters config (FlaubertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Flaubert Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute span start logits and span end logits). This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None langs: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None lengths: typing.Optional[torch.Tensor] = None cache: typing.Union[typing.Dict[str, torch.Tensor], NoneType] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None start_positions: typing.Optional[torch.Tensor] = None end_positions: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. 
See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? lengths (torch.LongTensor of shape (batch_size,), optional) — Length of each sentence that can be used to avoid performing attention on padding token indices. You can also use attention_mask for the same result (see above), kept here for compatibility. Indices selected in [0, ..., input_ids.size(-1)]: cache (Dict[str, torch.FloatTensor], optional) — Dictionary strings to torch.FloatTensor that contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential decoding. The dictionary object will be modified in-place during the forward pass to add newly computed hidden-states. head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. start_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. end_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FlaubertConfig) and inputs. 
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaubertForQuestionAnsweringSimple forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FlaubertForQuestionAnsweringSimple >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased") >>> model = FlaubertForQuestionAnsweringSimple.from_pretrained("flaubert/flaubert_base_cased") >>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" >>> inputs = tokenizer(question, text, return_tensors="pt") >>> with torch.no_grad(): ... 
outputs = model(**inputs) >>> answer_start_index = outputs.start_logits.argmax() >>> answer_end_index = outputs.end_logits.argmax() >>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] >>> >>> target_start_index = torch.tensor([14]) >>> target_end_index = torch.tensor([15]) >>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index) >>> loss = outputs.loss FlaubertForQuestionAnswering class transformers.FlaubertForQuestionAnswering < source > ( config ) forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None langs: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None lengths: typing.Optional[torch.Tensor] = None cache: typing.Union[typing.Dict[str, torch.Tensor], NoneType] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None start_positions: typing.Optional[torch.Tensor] = None end_positions: typing.Optional[torch.Tensor] = None is_impossible: typing.Optional[torch.Tensor] = None cls_index: typing.Optional[torch.Tensor] = None p_mask: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.flaubert.modeling_flaubert.FlaubertForQuestionAnsweringOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Returns transformers.models.flaubert.modeling_flaubert.FlaubertForQuestionAnsweringOutput or tuple(torch.FloatTensor) A transformers.models.flaubert.modeling_flaubert.FlaubertForQuestionAnsweringOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FlaubertConfig) and inputs. config (FlaubertConfig): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The FlaubertForQuestionAnswering forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Base class for outputs of question answering models using a SquadHead. Example: >>> from transformers import XLMTokenizer, XLMForQuestionAnswering >>> import torch >>> tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-en-2048") >>> model = XLMForQuestionAnswering.from_pretrained("xlm-mlm-en-2048") >>> input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze( ... 0 ... ) >>> start_positions = torch.tensor([1]) >>> end_positions = torch.tensor([3]) >>> outputs = model(input_ids, start_positions=start_positions, end_positions=end_positions) >>> loss = outputs.loss TFFlaubertModel class transformers.TFFlaubertModel < source > ( *args **kwargs ) Parameters config (FlaubertConfig) — Model configuration class with all the parameters of the model. 
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare Flaubert Model transformer outputting raw hidden-states without any specific head on top. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: np.ndarray | tf.Tensor | None = None attention_mask: np.ndarray | tf.Tensor | None = None langs: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None lengths: np.ndarray | tf.Tensor | None = None cache: Optional[Dict[str, tf.Tensor]] = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? langs (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — A parallel sequence of tokens to be used to indicate the language of each token in the input. 
Indices are languages ids which can be obtained from the language names by using two conversion mappings provided in the configuration of the model (only provided for multilingual models). More precisely, the language name to language id mapping is in model.config.lang2id (which is a dictionary string to int) and the language id to language name mapping is in model.config.id2lang (dictionary int to string). See usage examples detailed in the multilingual documentation. token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? lengths (tf.Tensor or Numpy array of shape (batch_size,), optional) — Length of each sentence that can be used to avoid performing attention on padding token indices. You can also use attention_mask for the same result (see above), kept here for compatibility Indices selected in [0, ..., input_ids.size(-1)]: cache (Dict[str, tf.Tensor], optional) — Dictionary string to tf.FloatTensor that contains precomputed hidden states (key and values in the attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential decoding. The dictionary object will be modified in-place during the forward pass to add newly computed hidden-states. head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). A transformers.modeling_tf_outputs.TFBaseModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FlaubertConfig) and inputs. 
last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(tf.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFFlaubertModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFFlaubertModel >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased") >>> model = TFFlaubertModel.from_pretrained("flaubert/flaubert_base_cased") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> outputs = model(inputs) >>> last_hidden_states = outputs.last_hidden_state TFFlaubertWithLMHeadModel class transformers.TFFlaubertWithLMHeadModel < source > ( *args **kwargs ) Parameters config (FlaubertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The Flaubert Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! 
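For example, a minimal sketch of the two formats with TFFlaubertWithLMHeadModel (the French sentence is only a placeholder):

>>> from transformers import AutoTokenizer, TFFlaubertWithLMHeadModel

>>> tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased")
>>> model = TFFlaubertWithLMHeadModel.from_pretrained("flaubert/flaubert_base_cased")

>>> encoding = tokenizer("Le chat dort sur le canapé.", return_tensors="tf")
>>> outputs = model(**encoding)  # first format: all inputs as keyword arguments
>>> outputs = model(dict(encoding))  # second format: everything packed into the first positional argument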
If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: np.ndarray | tf.Tensor | None = None attention_mask: np.ndarray | tf.Tensor | None = None langs: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None lengths: np.ndarray | tf.Tensor | None = None cache: Optional[Dict[str, tf.Tensor]] = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: Optional[bool] = False ) → transformers.models.flaubert.modeling_tf_flaubert.TFFlaubertWithLMHeadModelOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? langs (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are languages ids which can be obtained from the language names by using two conversion mappings provided in the configuration of the model (only provided for multilingual models). More precisely, the language name to language id mapping is in model.config.lang2id (which is a dictionary string to int) and the language id to language name mapping is in model.config.id2lang (dictionary int to string). See usage examples detailed in the multilingual documentation. token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? lengths (tf.Tensor or Numpy array of shape (batch_size,), optional) — Length of each sentence that can be used to avoid performing attention on padding token indices. 
You can also use attention_mask for the same result (see above), kept here for compatibility Indices selected in [0, ..., input_ids.size(-1)]: cache (Dict[str, tf.Tensor], optional) — Dictionary string to tf.FloatTensor that contains precomputed hidden states (key and values in the attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential decoding. The dictionary object will be modified in-place during the forward pass to add newly computed hidden-states. head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). Returns transformers.models.flaubert.modeling_tf_flaubert.TFFlaubertWithLMHeadModelOutput or tuple(tf.Tensor) A transformers.models.flaubert.modeling_tf_flaubert.TFFlaubertWithLMHeadModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FlaubertConfig) and inputs. logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFFlaubertWithLMHeadModel forward method, overrides the __call__ special method. 
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFFlaubertWithLMHeadModel >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased") >>> model = TFFlaubertWithLMHeadModel.from_pretrained("flaubert/flaubert_base_cased") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> outputs = model(inputs) >>> logits = outputs.logits TFFlaubertForSequenceClassification class transformers.TFFlaubertForSequenceClassification < source > ( *args **kwargs ) Parameters config (FlaubertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Flaubert Model with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! 
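Building on the note about model.fit() above, here is a minimal, hedged fine-tuning sketch for TFFlaubertForSequenceClassification (the texts, labels and hyper-parameters are placeholders, and it assumes a recent transformers release where compiling without an explicit loss makes the model fall back to its internal loss):

>>> import tensorflow as tf
>>> from transformers import AutoTokenizer, TFFlaubertForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased")
>>> model = TFFlaubertForSequenceClassification.from_pretrained("flaubert/flaubert_base_cased", num_labels=2)

>>> texts = ["Un film magnifique.", "Une perte de temps."]  # placeholder training data
>>> labels = [1, 0]
>>> enc = tokenizer(texts, padding=True, return_tensors="tf")
>>> dataset = tf.data.Dataset.from_tensor_slices((dict(enc), labels)).batch(2)

>>> model.compile(optimizer=tf.keras.optimizers.Adam(5e-5))  # no loss given: the model's internal loss is used
>>> model.fit(dataset, epochs=1)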
call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None langs: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None lengths: np.ndarray | tf.Tensor | None = None cache: Optional[Dict[str, tf.Tensor]] = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: bool = False ) → transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? langs (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are languages ids which can be obtained from the language names by using two conversion mappings provided in the configuration of the model (only provided for multilingual models). More precisely, the language name to language id mapping is in model.config.lang2id (which is a dictionary string to int) and the language id to language name mapping is in model.config.id2lang (dictionary int to string). See usage examples detailed in the multilingual documentation. token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? lengths (tf.Tensor or Numpy array of shape (batch_size,), optional) — Length of each sentence that can be used to avoid performing attention on padding token indices. You can also use attention_mask for the same result (see above), kept here for compatibility Indices selected in [0, ..., input_ids.size(-1)]: cache (Dict[str, tf.Tensor], optional) — Dictionary string to tf.FloatTensor that contains precomputed hidden states (key and values in the attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential decoding. The dictionary object will be modified in-place during the forward pass to add newly computed hidden-states. head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. 
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). labels (tf.Tensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FlaubertConfig) and inputs. loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFFlaubertForSequenceClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: >>> from transformers import AutoTokenizer, TFFlaubertForSequenceClassification >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased") >>> model = TFFlaubertForSequenceClassification.from_pretrained("flaubert/flaubert_base_cased") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> logits = model(**inputs).logits >>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0]) >>> >>> num_labels = len(model.config.id2label) >>> model = TFFlaubertForSequenceClassification.from_pretrained("flaubert/flaubert_base_cased", num_labels=num_labels) >>> labels = tf.constant(1) >>> loss = model(**inputs, labels=labels).loss TFFlaubertForMultipleChoice class transformers.TFFlaubertForMultipleChoice < source > ( *args **kwargs ) Parameters config (FlaubertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Flaubert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! 
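Note that for multiple choice every input tensor carries an extra num_choices dimension, i.e. shape (batch_size, num_choices, sequence_length). A minimal, hedged sketch of building such a batch and passing the index of the correct choice as labels (the premise and choices are placeholders):

>>> import tensorflow as tf
>>> from transformers import AutoTokenizer, TFFlaubertForMultipleChoice

>>> tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased")
>>> model = TFFlaubertForMultipleChoice.from_pretrained("flaubert/flaubert_base_cased")

>>> premise = "La pizza se mange"  # placeholder premise
>>> choices = ["avec une fourchette et un couteau.", "en la tenant à la main."]  # placeholder choices
>>> enc = tokenizer([premise] * len(choices), choices, padding=True, return_tensors="tf")
>>> inputs = {k: tf.expand_dims(v, 0) for k, v in enc.items()}  # add the batch dimension: (1, num_choices, seq_len)

>>> labels = tf.constant([0])  # index of the correct choice for each example in the batch
>>> outputs = model(**inputs, labels=labels)
>>> loss, logits = outputs.loss, outputs.logits  # logits has shape (1, num_choices)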
call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None langs: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None lengths: np.ndarray | tf.Tensor | None = None cache: Optional[Dict[str, tf.Tensor]] = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: bool = False ) → transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? langs (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are languages ids which can be obtained from the language names by using two conversion mappings provided in the configuration of the model (only provided for multilingual models). More precisely, the language name to language id mapping is in model.config.lang2id (which is a dictionary string to int) and the language id to language name mapping is in model.config.id2lang (dictionary int to string). See usage examples detailed in the multilingual documentation. token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? lengths (tf.Tensor or Numpy array of shape (batch_size,), optional) — Length of each sentence that can be used to avoid performing attention on padding token indices. You can also use attention_mask for the same result (see above), kept here for compatibility Indices selected in [0, ..., input_ids.size(-1)]: cache (Dict[str, tf.Tensor], optional) — Dictionary string to tf.FloatTensor that contains precomputed hidden states (key and values in the attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential decoding. The dictionary object will be modified in-place during the forward pass to add newly computed hidden-states. head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. 
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). A transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FlaubertConfig) and inputs. loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification loss. logits (tf.Tensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above). Classification scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFFlaubertForMultipleChoice forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFFlaubertForMultipleChoice >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased") >>> model = TFFlaubertForMultipleChoice.from_pretrained("flaubert/flaubert_base_cased") >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." >>> choice0 = "It is eaten with a fork and a knife." >>> choice1 = "It is eaten while held in the hand." 
>>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="tf", padding=True) >>> inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()} >>> outputs = model(inputs) >>> >>> logits = outputs.logits TFFlaubertForTokenClassification class transformers.TFFlaubertForTokenClassification < source > ( *args **kwargs ) Parameters config (FlaubertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Flaubert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None langs: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None lengths: np.ndarray | tf.Tensor | None = None cache: Optional[Dict[str, tf.Tensor]] = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: bool = False ) → transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. 
See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? langs (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are languages ids which can be obtained from the language names by using two conversion mappings provided in the configuration of the model (only provided for multilingual models). More precisely, the language name to language id mapping is in model.config.lang2id (which is a dictionary string to int) and the language id to language name mapping is in model.config.id2lang (dictionary int to string). See usage examples detailed in the multilingual documentation. token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? lengths (tf.Tensor or Numpy array of shape (batch_size,), optional) — Length of each sentence that can be used to avoid performing attention on padding token indices. You can also use attention_mask for the same result (see above), kept here for compatibility Indices selected in [0, ..., input_ids.size(-1)]: cache (Dict[str, tf.Tensor], optional) — Dictionary string to tf.FloatTensor that contains precomputed hidden states (key and values in the attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential decoding. The dictionary object will be modified in-place during the forward pass to add newly computed hidden-states. head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. 
This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). labels (tf.Tensor of shape (batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1]. A transformers.modeling_tf_outputs.TFTokenClassifierOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FlaubertConfig) and inputs. loss (tf.Tensor of shape (n,), optional, where n is the number of unmasked labels, returned when labels is provided) — Classification loss. logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFFlaubertForTokenClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFFlaubertForTokenClassification >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased") >>> model = TFFlaubertForTokenClassification.from_pretrained("flaubert/flaubert_base_cased") >>> inputs = tokenizer( ... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf" ... ) >>> logits = model(**inputs).logits >>> predicted_token_class_ids = tf.math.argmax(logits, axis=-1) >>> >>> >>> >>> predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()] >>> labels = predicted_token_class_ids >>> loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss) TFFlaubertForQuestionAnsweringSimple class transformers.TFFlaubertForQuestionAnsweringSimple < source > ( *args **kwargs ) Parameters config (FlaubertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Flaubert Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits). This model inherits from TFPreTrainedModel. 
Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None langs: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None lengths: np.ndarray | tf.Tensor | None = None cache: Optional[Dict[str, tf.Tensor]] = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None start_positions: np.ndarray | tf.Tensor | None = None end_positions: np.ndarray | tf.Tensor | None = None training: bool = False ) → transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? langs (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — A parallel sequence of tokens to be used to indicate the language of each token in the input. Indices are languages ids which can be obtained from the language names by using two conversion mappings provided in the configuration of the model (only provided for multilingual models). 
More precisely, the language name to language id mapping is in model.config.lang2id (which is a dictionary string to int) and the language id to language name mapping is in model.config.id2lang (dictionary int to string). See usage examples detailed in the multilingual documentation. token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? lengths (tf.Tensor or Numpy array of shape (batch_size,), optional) — Length of each sentence that can be used to avoid performing attention on padding token indices. You can also use attention_mask for the same result (see above), kept here for compatibility Indices selected in [0, ..., input_ids.size(-1)]: cache (Dict[str, tf.Tensor], optional) — Dictionary string to tf.FloatTensor that contains precomputed hidden states (key and values in the attention blocks) as computed by the model (see cache output below). Can be used to speed up sequential decoding. The dictionary object will be modified in-place during the forward pass to add newly computed hidden-states. head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). start_positions (tf.Tensor of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. end_positions (tf.Tensor of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. 
Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. A transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FlaubertConfig) and inputs. loss (tf.Tensor of shape (batch_size, ), optional, returned when start_positions and end_positions are provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. start_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFFlaubertForQuestionAnsweringSimple forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFFlaubertForQuestionAnsweringSimple >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_base_cased") >>> model = TFFlaubertForQuestionAnsweringSimple.from_pretrained("flaubert/flaubert_base_cased") >>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" >>> inputs = tokenizer(question, text, return_tensors="tf") >>> outputs = model(**inputs) >>> answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0]) >>> answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0]) >>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] >>> >>> target_start_index = tf.constant([14]) >>> target_end_index = tf.constant([15]) >>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index) >>> loss = tf.math.reduce_mean(outputs.loss)
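To turn the predicted span back into text, the token ids can be decoded with the tokenizer (continuing the example above):

>>> answer = tokenizer.decode(predict_answer_tokens)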
https://huggingface.co/docs/transformers/model_doc/flava
FLAVA Overview The FLAVA model was proposed in FLAVA: A Foundational Language And Vision Alignment Model by Amanpreet Singh, Ronghang Hu, Vedanuj Goswami, Guillaume Couairon, Wojciech Galuba, Marcus Rohrbach, and Douwe Kiela and was accepted at CVPR 2022. The paper aims at creating a single unified foundation model that can work across vision, language, and vision-and-language multimodal tasks. The abstract from the paper is the following: State-of-the-art vision and vision-and-language models rely on large-scale visio-linguistic pretraining for obtaining good performance on a variety of downstream tasks. Generally, such models are often either cross-modal (contrastive) or multi-modal (with earlier fusion) but not both; and they often only target specific modalities or tasks. A promising direction would be to use a single holistic universal model, as a “foundation”, that targets all modalities at once — a true vision and language foundation model should be good at vision tasks, language tasks, and cross- and multi-modal vision and language tasks. We introduce FLAVA as such a model and demonstrate impressive performance on a wide range of 35 tasks spanning these target modalities. This model was contributed by aps. The original code can be found here. FlavaConfig class transformers.FlavaConfig < source > ( image_config: typing.Dict[str, typing.Any] = None text_config: typing.Dict[str, typing.Any] = None multimodal_config: typing.Dict[str, typing.Any] = None image_codebook_config: typing.Dict[str, typing.Any] = None hidden_size: int = 768 layer_norm_eps: float = 1e-12 projection_dim: int = 768 init_codebook: bool = True logit_scale_init_value: float = 2.6592 initializer_range: float = 0.02 ce_ignore_index: int = -100 mim_weight: float = 1.0 mlm_weight: float = 1.0 global_contrastive_weight: float = 1.0 itm_weight: float = 1.0 mmm_image_weight: float = 1.0 mmm_text_weight: float = 1.0 global_backprop_contrastive: bool = True skip_unmasked_multimodal_encoder: bool = True return_loss: bool = True **kwargs ) Parameters text_config (dict, optional) — Dictionary of configuration options used to initialize FlavaTextConfig. image_config (dict, optional) — Dictionary of configuration options used to initialize FlavaImageConfig. multimodal_config (dict, optional) — Dictionary of configuration options used to initialize FlavaMultimodalConfig. hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers. projection_dim (int, optional, defaults to 768) — Dimensionality of the text and image projection layers. logit_scale_init_value (float, optional, defaults to 2.6592) — The initial value of the logit_scale parameter. The default is used as per the original FLAVA/CLIP implementation. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. ce_ignore_index (int, optional, defaults to -100) — Cross entropy index to ignore. mim_weight (float, optional, defaults to 1.0) — Weight to be assigned to the MIM (Masked Image Modeling) unimodal loss. mlm_weight (float, optional, defaults to 1.0) — Weight to be assigned to the MLM (Masked Language Modeling) unimodal loss. global_contrastive_weight (float, optional, defaults to 1.0) — Weight to be assigned to the global contrastive cross-alignment loss.
itm_weight (float, optional, defaults to 1.0) — Weight to be assigned to the image-text matching multimodal loss. mmm_image_weight (float, optional, defaults to 1.0) — Weight to be assigned to the image part of the MMM loss. mmm_text_weight (float, optional, defaults to 1.0) — Weight to be assigned to the text part of the MMM loss. global_backprop_contrastive (bool, optional, defaults to True) — Whether to use global backpropagation through all workers in the contrastive loss. skip_unmasked_multimodal_encoder (bool, optional, defaults to True) — Whether to skip running the unmasked multimodal encoder, whose outputs are not used by the FLAVA losses. return_loss (bool, optional, defaults to True) — Whether to return the loss or not. kwargs (optional) — Dictionary of keyword arguments. FlavaConfig is the configuration class to store the configuration of a FlavaModel. It is used to instantiate a FLAVA model according to the specified arguments, defining the text model, image model, image codebook and multimodal model configs. Instantiating a configuration with the defaults will yield a similar configuration to that of the FLAVA facebook/flava-full architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import FlavaConfig, FlavaModel, FlavaForPreTraining >>> >>> configuration = FlavaConfig() >>> >>> model = FlavaModel(configuration) >>> model_pre = FlavaForPreTraining(configuration) >>> >>> configuration = model.config >>> configuration_pre = model_pre.config from_configs < source > ( image_config: FlavaImageConfig text_config: FlavaTextConfig multimodal_config: FlavaMultimodalConfig image_codebook_config: FlavaImageCodebookConfig **kwargs ) → FlavaConfig An instance of a configuration object Instantiate a FlavaConfig (or a derived class) from a FLAVA text model configuration, a FLAVA image model configuration, a FLAVA multimodal model configuration and a FLAVA codebook model configuration. FlavaTextConfig class transformers.FlavaTextConfig < source > ( vocab_size: int = 30522 type_vocab_size: int = 2 max_position_embeddings: int = 512 position_embedding_type: str = 'absolute' hidden_size: int = 768 num_hidden_layers: int = 12 num_attention_heads: int = 12 intermediate_size: int = 3072 hidden_act: str = 'gelu' hidden_dropout_prob: float = 0.0 attention_probs_dropout_prob: float = 0.0 initializer_range: float = 0.02 layer_norm_eps: float = 1e-12 pad_token_id: int = 0 qkv_bias: bool = True **kwargs ) Parameters vocab_size (int, optional, defaults to 30522) — Vocabulary size of the BERT model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling FlavaTextModel. type_vocab_size (int, optional, defaults to 2) — The vocabulary size of the token_type_ids passed when calling FlavaTextModel. Note that even though the text encoder allows a token_type_ids vocabulary of 2, only 1 is used for text-only pretraining and fine-tuning, similar to RoBERTa. max_position_embeddings (int, optional, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). For VL tasks, the max_length passed to the model is 77. position_embedding_type (str, optional, defaults to "absolute") — Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For positional embeddings use "absolute".
For more information on "relative_key", please refer to Self-Attention with Relative Position Representations (Shaw et al.). For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models with Better Relative Position Embeddings (Huang et al.). hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder. hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported. hidden_dropout_prob (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers. pad_token_id (int, optional, defaults to 0) — The id of the padding token. qkv_bias (bool, optional, defaults to True) — Whether to add a bias to the queries, keys and values. This is the configuration class to store the configuration of a FlavaTextModel. It is used to instantiate a FLAVA model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the FLAVA facebook/flava-full architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: 
>>> from transformers import FlavaTextConfig, FlavaTextModel
>>> # Initializing a FlavaTextConfig with default values
>>> configuration = FlavaTextConfig()
>>> # Initializing a FlavaTextModel (with random weights) from the configuration
>>> model = FlavaTextModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
FlavaImageConfig class transformers.FlavaImageConfig < source > ( hidden_size: int = 768 num_hidden_layers: int = 12 num_attention_heads: int = 12 intermediate_size: int = 3072 hidden_act: int = 'gelu' hidden_dropout_prob: float = 0.0 attention_probs_dropout_prob: float = 0.0 initializer_range: float = 0.02 layer_norm_eps: float = 1e-12 image_size: int = 224 patch_size: int = 16 num_channels: int = 3 qkv_bias: bool = True mask_token: bool = True vocab_size: int = 8192 **kwargs ) Parameters hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder. 
hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported. hidden_dropout_prob (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers. image_size (int, optional, defaults to 224) — The size (resolution) of each image. patch_size (int, optional, defaults to 16) — The size (resolution) of each patch. num_channels (int, optional, defaults to 3) — The number of input channels. qkv_bias (bool, optional, defaults to True) — Whether to add a bias to the queries, keys and values. mask_token (bool, optional, defaults to True) — Whether to use a mask token or not. Used in the MIM (Masked Image Modeling) loss for FLAVA. vocab_size (int, optional, defaults to 8192) — Vocabulary size of the FlavaImageCodebook used in conjunction with FlavaImageModel for the MIM (Masked Image Modeling) loss for FLAVA. This is the configuration class to store the configuration of a FlavaImageModel. It is used to instantiate a FLAVA model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the FLAVA facebook/flava-full architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: 
>>> from transformers import FlavaImageConfig, FlavaImageModel
>>> # Initializing a FlavaImageConfig with default values
>>> configuration = FlavaImageConfig()
>>> # Initializing a FlavaImageModel (with random weights) from the configuration
>>> model = FlavaImageModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
FlavaMultimodalConfig class transformers.FlavaMultimodalConfig < source > ( hidden_size: int = 768 num_hidden_layers: int = 6 num_attention_heads: int = 12 intermediate_size: int = 3072 hidden_act: int = 'gelu' hidden_dropout_prob: int = 0.0 attention_probs_dropout_prob: int = 0.0 initializer_range: float = 0.02 layer_norm_eps: float = 1e-12 qkv_bias: bool = True use_cls_token: bool = True **kwargs ) Parameters hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (int, optional, defaults to 6) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder. hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported. hidden_dropout_prob (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities. 
initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers. qkv_bias (bool, optional, defaults to True) — Whether to add a bias to the queries, keys and values. use_cls_token (bool, optional, defaults to True) — Whether to use an extra CLS token for multimodal settings. Usually needed by the FLAVA model. This is the configuration class to store the configuration of a FlavaMultimodalModel. It is used to instantiate a FLAVA model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the FLAVA facebook/flava-full architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: 
>>> from transformers import FlavaMultimodalConfig, FlavaMultimodalModel
>>> # Initializing a FlavaMultimodalConfig with default values
>>> configuration = FlavaMultimodalConfig()
>>> # Initializing a FlavaMultimodalModel (with random weights) from the configuration
>>> model = FlavaMultimodalModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
FlavaImageCodebookConfig class transformers.FlavaImageCodebookConfig < source > ( num_groups: int = 4 input_channels: int = 3 num_blocks_per_group: int = 2 hidden_size: int = 256 vocab_size: int = 8192 freeze: int = True initializer_range: float = 0.02 **kwargs ) FlavaProcessor class transformers.FlavaProcessor < source > ( image_processor = None tokenizer = None **kwargs ) Parameters image_processor (FlavaImageProcessor) — The image processor is a required input. tokenizer (BertTokenizerFast) — The tokenizer is a required input. Constructs a FLAVA processor which wraps a FLAVA image processor and a FLAVA tokenizer into a single processor. FlavaProcessor offers all the functionalities of FlavaImageProcessor and BertTokenizerFast. See the __call__() and decode() methods for more information. This method forwards all its arguments to BertTokenizerFast’s batch_decode(). Please refer to the docstring of this method for more information. This method forwards all its arguments to BertTokenizerFast’s decode(). Please refer to the docstring of this method for more information. 
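Example (a minimal usage sketch; the checkpoint name and image URL below are only illustrative): 
>>> from PIL import Image
>>> import requests
>>> from transformers import FlavaProcessor
>>> processor = FlavaProcessor.from_pretrained("facebook/flava-full")
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> # prepare both the text and the image for the model in a single call
>>> inputs = processor(text=["a photo of two cats"], images=image, return_tensors="pt", padding=True)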
FlavaFeatureExtractor FlavaImageProcessor class transformers.FlavaImageProcessor < source > ( do_resize: bool = True size: typing.Dict[str, int] = None resample: Resampling = <Resampling.BICUBIC: 3> do_center_crop: bool = True crop_size: typing.Dict[str, int] = None do_rescale: bool = True rescale_factor: typing.Union[int, float] = 0.00392156862745098 do_normalize: bool = True image_mean: typing.Union[float, typing.Iterable[float], NoneType] = None image_std: typing.Union[float, typing.Iterable[float], NoneType] = None return_image_mask: bool = False input_size_patches: int = 14 total_mask_patches: int = 75 mask_group_min_patches: int = 16 mask_group_max_patches: typing.Optional[int] = None mask_group_min_aspect_ratio: float = 0.3 mask_group_max_aspect_ratio: typing.Optional[float] = None return_codebook_pixels: bool = False codebook_do_resize: bool = True codebook_size: bool = None codebook_resample: int = <Resampling.LANCZOS: 1> codebook_do_center_crop: bool = True codebook_crop_size: int = None codebook_do_rescale: bool = True codebook_rescale_factor: typing.Union[int, float] = 0.00392156862745098 codebook_do_map_pixels: bool = True codebook_do_normalize: bool = True codebook_image_mean: typing.Union[float, typing.Iterable[float], NoneType] = None codebook_image_std: typing.Union[float, typing.Iterable[float], NoneType] = None **kwargs ) Parameters do_resize (bool, optional, defaults to True) — Whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by the do_resize parameter in preprocess. size (Dict[str, int], optional, defaults to {"height": 224, "width": 224}) — Size of the image after resizing. Can be overridden by the size parameter in preprocess. resample (PILImageResampling, optional, defaults to PILImageResampling.BICUBIC) — Resampling filter to use if resizing the image. Can be overridden by the resample parameter in preprocess. do_center_crop (bool, optional, defaults to True) — Whether to center crop the images. Can be overridden by the do_center_crop parameter in preprocess. crop_size (Dict[str, int], optional, defaults to {"height": 224, "width": 224}) — Size of image after the center crop (crop_size["height"], crop_size["width"]). Can be overridden by the crop_size parameter in preprocess. do_rescale (bool, optional, defaults to True) — Whether to rescale the image by the specified scale rescale_factor. Can be overridden by the do_rescale parameter in preprocess. rescale_factor (int or float, optional, defaults to 1/255) — Scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter in preprocess. do_normalize (bool, optional, defaults to True) — Whether to normalize the image. Can be overridden by the do_normalize parameter in preprocess. image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) — Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the image_mean parameter in the preprocess method. image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) — Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the image_std parameter in the preprocess method. return_image_mask (bool, optional, defaults to False) — Whether to return the image mask. Can be overridden by the return_image_mask parameter in preprocess. 
input_size_patches (int, optional, defaults to 14) — Number of patches in the image in height and width direction. 14x14 = 196 total patches. Can be overridden by the input_size_patches parameter in preprocess. total_mask_patches (int, optional, defaults to 75) — Total number of patches that should be masked. Can be overridden by the total_mask_patches parameter in preprocess. mask_group_min_patches (int, optional, defaults to 16) — Minimum number of patches that should be masked. Can be overridden by the mask_group_min_patches parameter in preprocess. mask_group_max_patches (int, optional) — Maximum number of patches that should be masked. Can be overridden by the mask_group_max_patches parameter in preprocess. mask_group_min_aspect_ratio (float, optional, defaults to 0.3) — Minimum aspect ratio of the mask window. Can be overridden by the mask_group_min_aspect_ratio parameter in preprocess. mask_group_max_aspect_ratio (float, optional) — Maximum aspect ratio of the mask window. Can be overridden by the mask_group_max_aspect_ratio parameter in preprocess. codebook_do_resize (bool, optional, defaults to True) — Whether to resize the input for codebook to a certain codebook_size. Can be overridden by the codebook_do_resize parameter in preprocess. codebook_size (Dict[str, int], optional, defaults to {"height": 224, "width": 224}) — Resize the input for codebook to the given size. Can be overridden by the codebook_size parameter in preprocess. codebook_resample (PILImageResampling, optional, defaults to PILImageResampling.LANCZOS) — Resampling filter to use if resizing the codebook image. Can be overridden by the codebook_resample parameter in preprocess. codebook_do_center_crop (bool, optional, defaults to True) — Whether to crop the input for codebook at the center. If the input size is smaller than codebook_crop_size along any edge, the image is padded with 0’s and then center cropped. Can be overridden by the codebook_do_center_crop parameter in preprocess. codebook_crop_size (Dict[str, int], optional, defaults to {"height": 224, "width": 224}) — Desired output size for codebook input when applying center-cropping. Can be overridden by the codebook_crop_size parameter in preprocess. codebook_do_rescale (bool, optional, defaults to True) — Whether to rescale the input for codebook by the specified scale codebook_rescale_factor. Can be overridden by the codebook_do_rescale parameter in preprocess. codebook_rescale_factor (int or float, optional, defaults to 1/255) — Defines the scale factor to use if rescaling the codebook image. Can be overridden by the codebook_rescale_factor parameter in preprocess. codebook_do_map_pixels (bool, optional, defaults to True) — Whether to map the pixel values of the codebook input to (1 - 2e)x + e. Can be overridden by the codebook_do_map_pixels parameter in preprocess. codebook_do_normalize (bool, optional, defaults to True) — Whether or not to normalize the input for codebook with codebook_image_mean and codebook_image_std. Can be overridden by the codebook_do_normalize parameter in preprocess. codebook_image_mean (Optional[Union[float, Iterable[float]]], optional, defaults to [0, 0, 0]) — The sequence of means for each channel, to be used when normalizing images for codebook. Can be overridden by the codebook_image_mean parameter in preprocess. codebook_image_std (Optional[Union[float, Iterable[float]]], optional, defaults to [0.5, 0.5, 0.5]) — The sequence of standard deviations for each channel, to be used when normalizing images for codebook. 
Can be overridden by the codebook_image_std parameter in preprocess. Constructs a Flava image processor. preprocess < source > ( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]] do_resize: typing.Optional[bool] = None size: typing.Dict[str, int] = None resample: Resampling = None do_center_crop: typing.Optional[bool] = None crop_size: typing.Union[typing.Dict[str, int], NoneType] = None do_rescale: typing.Optional[bool] = None rescale_factor: typing.Optional[float] = None do_normalize: typing.Optional[bool] = None image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None return_image_mask: typing.Optional[bool] = None input_size_patches: typing.Optional[int] = None total_mask_patches: typing.Optional[int] = None mask_group_min_patches: typing.Optional[int] = None mask_group_max_patches: typing.Optional[int] = None mask_group_min_aspect_ratio: typing.Optional[float] = None mask_group_max_aspect_ratio: typing.Optional[float] = None return_codebook_pixels: typing.Optional[bool] = None codebook_do_resize: typing.Optional[bool] = None codebook_size: typing.Union[typing.Dict[str, int], NoneType] = None codebook_resample: typing.Optional[int] = None codebook_do_center_crop: typing.Optional[bool] = None codebook_crop_size: typing.Union[typing.Dict[str, int], NoneType] = None codebook_do_rescale: typing.Optional[bool] = None codebook_rescale_factor: typing.Optional[float] = None codebook_do_map_pixels: typing.Optional[bool] = None codebook_do_normalize: typing.Optional[bool] = None codebook_image_mean: typing.Optional[typing.Iterable[float]] = None codebook_image_std: typing.Optional[typing.Iterable[float]] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'> input_data_format: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None **kwargs ) Parameters images (ImageInput) — Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set do_rescale=False. do_resize (bool, optional, defaults to self.do_resize) — Whether to resize the image. size (Dict[str, int], optional, defaults to self.size) — Size of the image. resample (int, optional, defaults to self.resample) — Resampling filter to use if resizing the image. This can be one of the enum PILImageResampling, Only has an effect if do_resize is set to True. do_center_crop (bool, optional, defaults to self.do_center_crop) — Whether to center crop the image. crop_size (Dict[str, int], optional, defaults to self.crop_size) — Size of the center crop. Only has an effect if do_center_crop is set to True. do_rescale (bool, optional, defaults to self.do_rescale) — Whether to rescale the image values between [0 - 1]. rescale_factor (float, optional, defaults to self.rescale_factor) — Rescale factor to rescale the image by if do_rescale is set to True. do_normalize (bool, optional, defaults to self.do_normalize) — Whether to normalize the image. image_mean (float or List[float], optional, defaults to self.image_mean) — Image mean. image_std (float or List[float], optional, defaults to self.image_std) — Image standard deviation. 
return_image_mask (bool, optional, defaults to self.return_image_mask) — Whether to return the image mask. input_size_patches (int, optional, defaults to self.input_size_patches) — Size of the patches to extract from the image. total_mask_patches (int, optional, defaults to self.total_mask_patches) — Total number of patches to extract from the image. mask_group_min_patches (int, optional, defaults to self.mask_group_min_patches) — Minimum number of patches to extract from the image. mask_group_max_patches (int, optional, defaults to self.mask_group_max_patches) — Maximum number of patches to extract from the image. mask_group_min_aspect_ratio (float, optional, defaults to self.mask_group_min_aspect_ratio) — Minimum aspect ratio of the patches to extract from the image. mask_group_max_aspect_ratio (float, optional, defaults to self.mask_group_max_aspect_ratio) — Maximum aspect ratio of the patches to extract from the image. return_codebook_pixels (bool, optional, defaults to self.return_codebook_pixels) — Whether to return the codebook pixels. codebook_do_resize (bool, optional, defaults to self.codebook_do_resize) — Whether to resize the codebook pixels. codebook_size (Dict[str, int], optional, defaults to self.codebook_size) — Size of the codebook pixels. codebook_resample (int, optional, defaults to self.codebook_resample) — Resampling filter to use if resizing the codebook pixels. This can be one of the enum PILImageResampling, Only has an effect if codebook_do_resize is set to True. codebook_do_center_crop (bool, optional, defaults to self.codebook_do_center_crop) — Whether to center crop the codebook pixels. codebook_crop_size (Dict[str, int], optional, defaults to self.codebook_crop_size) — Size of the center crop of the codebook pixels. Only has an effect if codebook_do_center_crop is set to True. codebook_do_rescale (bool, optional, defaults to self.codebook_do_rescale) — Whether to rescale the codebook pixels values between [0 - 1]. codebook_rescale_factor (float, optional, defaults to self.codebook_rescale_factor) — Rescale factor to rescale the codebook pixels by if codebook_do_rescale is set to True. codebook_do_map_pixels (bool, optional, defaults to self.codebook_do_map_pixels) — Whether to map the codebook pixels values. codebook_do_normalize (bool, optional, defaults to self.codebook_do_normalize) — Whether to normalize the codebook pixels. codebook_image_mean (float or List[float], optional, defaults to self.codebook_image_mean) — Codebook pixels mean to normalize the codebook pixels by if codebook_do_normalize is set to True. codebook_image_std (float or List[float], optional, defaults to self.codebook_image_std) — Codebook pixels standard deviation to normalize the codebook pixels by if codebook_do_normalize is set to True. return_tensors (str or TensorType, optional) — The type of tensors to return. Can be one of: Unset: Return a list of np.ndarray. TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor. TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor. TensorType.NUMPY or 'np': Return a batch of type np.ndarray. TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray. data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) — The channel dimension format for the output image. Can be one of: ChannelDimension.FIRST: image in (num_channels, height, width) format. ChannelDimension.LAST: image in (height, width, num_channels) format. 
input_data_format (ChannelDimension or str, optional) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of: "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format. "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format. "none" or ChannelDimension.NONE: image in (height, width) format. Preprocess an image or batch of images. FlavaForPreTraining class transformers.FlavaForPreTraining < source > ( config: FlavaConfig image_codebook: typing.Optional[torch.nn.modules.module.Module] = None ) Parameters config (FlavaConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. image_codebook (nn.Module) — If passed, the image codebook will be set to this. Otherwise, it will be initialized using the image_codebook_config defined in the config. The FLAVA model for pretraining which outputs losses, embeddings, logits and transformer outputs. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None input_ids_masked: typing.Optional[torch.LongTensor] = None pixel_values: typing.Optional[torch.FloatTensor] = None codebook_pixel_values: typing.Optional[torch.FloatTensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None bool_masked_pos: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.LongTensor] = None image_attention_mask: typing.Optional[torch.Tensor] = None skip_unmasked_multimodal_encoder: bool = None mlm_labels: typing.Optional[torch.Tensor] = None mim_labels: typing.Optional[torch.Tensor] = None itm_labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: bool = True return_dict: typing.Optional[bool] = None return_loss: typing.Optional[bool] = None ) → transformers.models.flava.modeling_flava.FlavaForPreTrainingOutput or tuple(torch.FloatTensor) Parameters input_ids_masked (torch.LongTensor of shape (batch_size, text_seq_len)) — Indices of input sequence tokens in the vocabulary. These are the masked version of input_ids, to be used for MLM. Indices can be obtained using AutoTokenizer along with DataCollatorForLanguageModeling. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? input_ids (torch.LongTensor of shape (batch_size, text_seq_len)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? token_type_ids (torch.LongTensor of shape (batch_size, text_seq_len), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See FlavaImageProcessor.call() for details. 
bool_masked_pos (torch.BoolTensor of shape (batch_size, image_num_patches)) — Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0). interpolate_pos_encoding (bool, optional) — Whether to interpolate the pre-trained position encodings. image_attention_mask (torch.FloatTensor of shape (batch_size, image_num_patches), optional) — Mask to avoid performing attention on padding token indices specifically for images. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? skip_unmasked_multimodal_encoder (bool, optional) — Skip any calculations for the multimodal encoder on unmasked inputs. FLAVA pretraining doesn’t need unmasked multimodal embeddings or outputs as of now. mlm_labels (torch.LongTensor of shape (batch_size, text_seq_len), optional) — Labels for computing the masked language and multimodal masked modeling loss. Indices should be in [-100, 0, ..., text_config.vocab_size - 1] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., text_config.vocab_size - 1]. mim_labels (torch.LongTensor of shape (batch_size, image_num_patches), optional) — Labels for computing the image and multimodal masked modeling loss. Indices should be in [-100, 0, ..., image_config.vocab_size - 1]. Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., image_config.vocab_size - 1]. If not passed, they are generated automatically using the image codebook assigned to the model. By default, it uses FlavaImageCodebook. See FlavaImageCodebook to understand how to generate mim_labels. itm_labels (torch.LongTensor of shape (batch_size, 1), optional) — Labels for computing the image-text matching loss. 0 means the pairs don’t match and 1 means they match. The pairs with 0 will be skipped for calculation of MMM and global contrastive losses as well. return_loss (bool, optional, defaults to None) — Whether to return the calculated loss or not. attention_mask (torch.FloatTensor of shape (batch_size, text_seq_len), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.models.flava.modeling_flava.FlavaForPreTrainingOutput or tuple(torch.FloatTensor) A transformers.models.flava.modeling_flava.FlavaForPreTrainingOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.flava.configuration_flava.FlavaConfig'>) and inputs. 
loss (torch.FloatTensor, optional, returned when return_loss is True) — Total loss calculated for this model. loss_info (FlavaLosses) — Detailed info for FLAVA Pretraining losses. Check the FlavaLosses class description for information on the keys. image_embeddings (torch.FloatTensor of shape (batch_size, output_dim), optional, returned when pixel_values are present) — The image embeddings which are basically the pooled output of FlavaImageModel. image_output (BaseModelOutputWithPooling, optional, returned when pixel_values are present) — The output of the FlavaImageModel. text_embeddings (torch.FloatTensor of shape (batch_size, output_dim), optional, returned when input_ids are present) — The text embeddings which are basically the pooled output of FlavaTextModel. text_output (BaseModelOutputWithPooling, optional, returned when input_ids are present) — The output of the FlavaTextModel. multimodal_embeddings (torch.FloatTensor of shape (batch_size, output_dim), optional, returned when input_ids and pixel_values are present and skip_unmasked_multimodal_encoder is None or False) — The multimodal embeddings which are basically the pooled output of the FlavaMultimodalModel. multimodal_output (BaseModelOutputWithPooling, returned when input_ids and pixel_values are present and skip_unmasked_multimodal_encoder is None or False) — The output of the FlavaMultimodalModel. image_masked_embeddings (torch.FloatTensor of shape (batch_size, output_dim), optional, returned when pixel_values are present) — The image embeddings which are basically the pooled output of FlavaImageModel. Uses bool_masked_pos to create masked images. image_masked_output (BaseModelOutputWithPooling, optional, returned when pixel_values are present) — The output of the FlavaImageModel. Uses bool_masked_pos to create masked images. text_masked_embeddings (torch.FloatTensor of shape (batch_size, output_dim), optional, returned when input_ids_masked are present) — The text embeddings which are basically the pooled output of FlavaTextModel. text_masked_output (BaseModelOutputWithPooling, optional, returned when input_ids_masked are present) — The output of the FlavaTextModel. multimodal_masked_embeddings (torch.FloatTensor of shape (batch_size, output_dim), optional, returned when input_ids and pixel_values are present) — The multimodal embeddings which are basically the pooled output of the FlavaMultimodalModel. multimodal_masked_output (BaseModelOutputWithPooling, returned when input_ids_masked and pixel_values are present) — The output of the FlavaMultimodalModel. mim_logits (torch.FloatTensor of shape (batch_size, num_image_patches, image_vocab_size) or of shape (total_masked_patches, image_vocab_size), optional, returned when pixel_values are present and input_ids_masked are not) — The logits for MIM unimodal loss. Uses bool_masked_pos to get masked patches. The flattened output is returned when bool_masked_pos has some of the patches masked. mlm_logits (torch.FloatTensor of shape (batch_size, text_seq_length, text_vocab_size) or of shape (total_masked_seq_length, text_vocab_size), optional, returned when input_ids_masked are present and pixel_values are not) — The logits for MLM unimodal loss. The flattened output is returned when input_ids_masked has some of the tokens masked. itm_logits (torch.FloatTensor of shape (batch_size, 2), optional, returned when input_ids_masked and pixel_values are present) — The logits for ITM loss. Note that ITM loss is calculated on masked pairs in FLAVA. 
mmm_image_logits (torch.FloatTensor of shape (batch_size, num_image_patches, image_vocab_size) or of shape (total_masked_patches, image_vocab_size), optional, returned when pixel_values and input_ids_masked are present) — The logits for MMM image multimodal loss. Uses bool_masked_pos to get masked patches. The flattened output is returned when bool_masked_pos has some of the patches masked. mmm_text_logits (torch.FloatTensor of shape (batch_size, text_seq_length, text_vocab_size) or of shape (total_masked_seq_length, text_vocab_size), optional, returned when pixel_values and input_ids_masked are present) — The logits for MMM text multimodal loss. The flattened output is returned when input_ids_masked has some of the tokens masked. contrastive_logits_per_image (torch.FloatTensor of shape (image_batch_size, text_batch_size)) — The scaled dot product scores between image_embeddings and text_embeddings but passed through FLAVA’s image_projection and text_projection layers respectively. This represents the image-text similarity scores. This is calculated on unmasked images and texts. contrastive_logits_per_text (torch.FloatTensor of shape (text_batch_size, image_batch_size)) — The scaled dot product scores between text_embeddings and image_embeddings but passed through FLAVA’s text_projection and image_projection layers respectively. This is calculated on unmasked images and texts. The FlavaForPreTraining forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. FlavaModel class transformers.FlavaModel < source > ( config: FlavaConfig ) Parameters config (FlavaConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare FLAVA Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None pixel_values: typing.Optional[torch.FloatTensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None bool_masked_pos: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.LongTensor] = None image_attention_mask: typing.Optional[torch.Tensor] = None skip_multimodal_encoder: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: bool = True return_dict: typing.Optional[bool] = None ) → transformers.models.flava.modeling_flava.FlavaModelOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See FlavaImageProcessor.call() for details. bool_masked_pos (torch.BoolTensor of shape (batch_size, image_num_patches)) — Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0). interpolate_pos_encoding (bool, optional) — Whether to interpolate the pre-trained position encodings. 
input_ids (torch.LongTensor of shape (batch_size, image_num_patches + text_seq_len)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? token_type_ids (torch.LongTensor of shape (batch_size, image_num_patches + text_seq_len), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? attention_mask (torch.FloatTensor of shape (batch_size, image_num_patches + text_seq_len), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. skip_multimodal_encoder (bool, optional) — Skip any calculations for the multimodal encoder. Useful if multimodal encoding is not going to be used. Returns transformers.models.flava.modeling_flava.FlavaModelOutput or tuple(torch.FloatTensor) A transformers.models.flava.modeling_flava.FlavaModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.flava.configuration_flava.FlavaConfig'>) and inputs. image_embeddings (torch.FloatTensor of shape (batch_size, output_dim), optional, returned when pixel_values are present) — The image embeddings which are basically the pooled output of FlavaImageModel. image_output (BaseModelOutputWithPooling, optional, returned when pixel_values are present) — The output of the FlavaImageModel. text_embeddings (torch.FloatTensor of shape (batch_size, output_dim), optional, returned when input_ids are present) — The text embeddings which are basically the pooled output of FlavaTextModel. text_output (BaseModelOutputWithPooling, optional, returned when input_ids are present) — The output of the FlavaTextModel. multimodal_embeddings (torch.FloatTensor of shape (batch_size, output_dim), optional, returned when input_ids and pixel_values are present and skip_multimodal_encoder is None or False) — The multimodal embeddings which are basically the pooled output of the FlavaMultimodalModel. multimodal_output (BaseModelOutputWithPooling, returned when input_ids and pixel_values are present and skip_multimodal_encoder is None or False) — The output of the FlavaMultimodalModel. The FlavaModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Examples: >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, FlavaModel >>> model = FlavaModel.from_pretrained("facebook/flava-full") >>> processor = AutoProcessor.from_pretrained("facebook/flava-full") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor(text=["a photo of a cat"], images=image, return_tensors="pt", padding=True) >>> outputs = model(**inputs) >>> logits_per_image = outputs.contrastive_logits_per_image >>> probs = logits_per_image.softmax(dim=1) get_text_features < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) Parameters input_ids (torch.LongTensor of shape (batch_size, text_seq_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? token_type_ids (torch.LongTensor of shape (batch_size, text_seq_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? attention_mask (torch.FloatTensor of shape (batch_size, text_seq_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. The FlavaModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. get_image_features < source > ( pixel_values: typing.Optional[torch.Tensor] = None bool_masked_pos: typing.Optional[torch.BoolTensor] = None interpolate_pos_encoding: typing.Optional[bool] = None attention_mask: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See FlavaImageProcessor.call() for details. 
bool_masked_pos (torch.BoolTensor of shape (batch_size, image_num_patches)) — Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0). interpolate_pos_encoding (bool, optional) — Whether to interpolate the pre-trained position encodings. attention_mask (torch.FloatTensor of shape (batch_size, image_num_patches), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. The FlavaModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. FlavaImageCodebook class transformers.FlavaImageCodebook < source > ( config: FlavaImageCodebookConfig **kwargs: typing.Any ) Parameters config (FlavaImageCodebookConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The FLAVA’s image codebook model inspired from DALL-E’s original encoder. Outputs raw hidden states and can be used to generate image tokens for an image based on DALL-E’s vocab. Used to generate labels for MIM. Use get_codebook_indices to get image tokens for an image. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. get_codebook_indices < source > ( pixel_values: Tensor ) get_codebook_probs < source > ( pixel_values: Tensor ) FlavaTextModel class transformers.FlavaTextModel < source > ( config: FlavaTextConfig add_pooling_layer: bool = True ) Parameters config (FlavaTextConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare FLAVA Text Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, text_seq_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? token_type_ids (torch.LongTensor of shape (batch_size, text_seq_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? attention_mask (torch.FloatTensor of shape (batch_size, text_seq_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FlavaTextConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. 
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlavaTextModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FlavaTextModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("facebook/flava-full") >>> model = FlavaTextModel.from_pretrained("facebook/flava-full") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state FlavaImageModel class transformers.FlavaImageModel < source > ( config: FlavaImageConfig add_pooling_layer: bool = True ) Parameters config (FlavaImageConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare FLAVA Image Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: typing.Optional[torch.Tensor] = None bool_masked_pos: typing.Optional[torch.BoolTensor] = None interpolate_pos_encoding: typing.Optional[bool] = None attention_mask: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See FlavaImageProcessor.call() for details. bool_masked_pos (torch.BoolTensor of shape (batch_size, image_num_patches)) — Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0). interpolate_pos_encoding (bool, optional) — Whether to interpolate the pre-trained position encodings. attention_mask (torch.FloatTensor of shape (batch_size, image_num_patches), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. 
See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FlavaImageConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlavaImageModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoImageProcessor, FlavaImageModel >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> image_processor = AutoImageProcessor.from_pretrained("facebook/flava-full") >>> model = FlavaImageModel.from_pretrained("facebook/flava-full") >>> inputs = image_processor(image, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state >>> list(last_hidden_states.shape) [1, 197, 768] FlavaMultimodalModel class transformers.FlavaMultimodalModel < source > ( config: FlavaMultimodalConfig add_pooling_layer = True ) Parameters config (FlavaMultimodalConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare FLAVA Multimodal Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
forward < source > ( hidden_states: Tensor attention_mask: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor) Parameters hidden_states (torch.FloatTensor of shape (batch_size, image_num_patches + text_seq_len, hidden_size)) — The concatenated hidden states of unimodal encoders. attention_mask (torch.FloatTensor of shape (batch_size, image_num_patches + text_seq_len), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FlavaMultimodalConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlavaMultimodalModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: >>> from transformers import AutoTokenizer, FlavaTextModel, FlavaImageModel, FlavaMultimodalModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("facebook/flava-full") >>> text_model = FlavaTextModel.from_pretrained("facebook/flava-full") >>> image_model = FlavaImageModel.from_pretrained("facebook/flava-full") >>> model = FlavaMultimodalModel.from_pretrained("facebook/flava-full") >>> text_hidden_states = text_model(**tokenizer("Hello, my dog is cute", return_tensors="pt")).last_hidden_state >>> # dummy pixel values; in practice use image processor outputs as in the FlavaImageModel example above >>> image_hidden_states = image_model(pixel_values=torch.randn(1, 3, 224, 224)).last_hidden_state >>> # the multimodal encoder consumes the concatenated hidden states of the unimodal encoders >>> outputs = model(hidden_states=torch.cat([image_hidden_states, text_hidden_states], dim=1)) >>> last_hidden_states = outputs.last_hidden_state
https://huggingface.co/docs/transformers/model_doc/ernie_m
ErnieM Overview The ErnieM model was proposed in ERNIE-M: Enhanced Multilingual Representation by Aligning Cross-lingual Semantics with Monolingual Corpora by Xuan Ouyang, Shuohuan Wang, Chao Pang, Yu Sun, Hao Tian, Hua Wu, Haifeng Wang. The abstract from the paper is the following: Recent studies have demonstrated that pre-trained cross-lingual models achieve impressive performance in downstream cross-lingual tasks. This improvement benefits from learning a large amount of monolingual and parallel corpora. Although it is generally acknowledged that parallel corpora are critical for improving the model performance, existing methods are often constrained by the size of parallel corpora, especially for low-resource languages. In this paper, we propose ERNIE-M, a new training method that encourages the model to align the representation of multiple languages with monolingual corpora, to overcome the constraint that the parallel corpus size places on the model performance. Our key insight is to integrate back-translation into the pre-training process. We generate pseudo-parallel sentence pairs on a monolingual corpus to enable the learning of semantic alignments between different languages, thereby enhancing the semantic modeling of cross-lingual models. Experimental results show that ERNIE-M outperforms existing cross-lingual models and delivers new state-of-the-art results in various cross-lingual downstream tasks. Tips: Ernie-M is a BERT-like model, so it is a stacked Transformer encoder. Instead of using MaskedLM for pretraining (like BERT), the authors used two novel techniques: Cross-attention Masked Language Modeling and Back-translation Masked Language Modeling. For now, these two LMHead objectives are not implemented here. It is a multilingual language model. Next Sentence Prediction was not used in the pretraining process. This model was contributed by Susnato Dhar. The original code can be found here. Documentation resources Text classification task guide Token classification task guide Question answering task guide Multiple choice task guide ErnieMConfig class transformers.ErnieMConfig < source > ( vocab_size: int = 250002 hidden_size: int = 768 num_hidden_layers: int = 12 num_attention_heads: int = 12 intermediate_size: int = 3072 hidden_act: str = 'gelu' hidden_dropout_prob: float = 0.1 attention_probs_dropout_prob: float = 0.1 max_position_embeddings: int = 514 initializer_range: float = 0.02 pad_token_id: int = 1 layer_norm_eps: float = 1e-05 classifier_dropout = None is_decoder = False act_dropout = 0.0 **kwargs ) Parameters vocab_size (int, optional, defaults to 250002) — Vocabulary size of inputs_ids in ErnieMModel. It is also the vocabulary size of the token embedding matrix. Defines the number of different tokens that can be represented by the inputs_ids passed when calling ErnieMModel. hidden_size (int, optional, defaults to 768) — Dimensionality of the embedding layer, encoder layers and pooler layer. num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (int, optional, defaults to 3072) — Dimensionality of the feed-forward (ff) layer in the encoder. Input tensors to feed-forward layers are first projected from hidden_size to intermediate_size, and then projected back to hidden_size. Typically intermediate_size is larger than hidden_size.
hidden_act (str, optional, defaults to "gelu") — The non-linear activation function in the feed-forward layer. "gelu", "relu" and any other torch-supported activation functions are supported. hidden_dropout_prob (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings and encoder. attention_probs_dropout_prob (float, optional, defaults to 0.1) — The dropout probability used in MultiHeadAttention in all encoder layers to drop some attention targets. act_dropout (float, optional, defaults to 0.0) — This dropout probability is used in ErnieMEncoderLayer after activation. max_position_embeddings (int, optional, defaults to 514) — The maximum number of positions in the position encoding, which dictates the maximum supported length of an input sequence. layer_norm_eps (float, optional, defaults to 1e-05) — The epsilon used by the layer normalization layers. classifier_dropout (float, optional) — The dropout ratio for the classification head. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the normal initializer for initializing all weight matrices. pad_token_id (int, optional, defaults to 1) — The index of the padding token in the token vocabulary. This is the configuration class to store the configuration of an ErnieMModel. It is used to instantiate an Ernie-M model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Ernie-M susnato/ernie-m-base_pytorch architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. A normal_initializer initializes weight matrices as normal distributions. See ErnieMPretrainedModel._init_weights() for how weights are initialized in ErnieMModel. ErnieMTokenizer class transformers.ErnieMTokenizer < source > ( sentencepiece_model_ckpt vocab_file = None do_lower_case = False encoding = 'utf8' unk_token = '[UNK]' sep_token = '[SEP]' pad_token = '[PAD]' cls_token = '[CLS]' mask_token = '[MASK]' sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None **kwargs ) Parameters sentencepiece_model_ckpt (str) — The file path of the sentencepiece model. vocab_file (str, optional) — The file path of the vocabulary. do_lower_case (bool, optional, defaults to False) — Whether or not to lowercase the input when tokenizing. unk_token (str, optional, defaults to "[UNK]") — A special token representing the unknown (out-of-vocabulary) token. An unknown token is set to unk_token in order to be converted to an ID. sep_token (str, optional, defaults to "[SEP]") — A special token separating two different sentences in the same input. pad_token (str, optional, defaults to "[PAD]") — A special token used to make arrays of tokens the same size for batching purposes. cls_token (str, optional, defaults to "[CLS]") — A special token used for sequence classification. It is the first token of the sequence when built with special tokens. mask_token (str, optional, defaults to "[MASK]") — A special token representing a masked token. This is the token used in the masked language modeling task, for which the model tries to predict the original unmasked token. Constructs an Ernie-M tokenizer. It uses the sentencepiece library to split words into sub-words.
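A minimal usage sketch, assuming the susnato/ernie-m-base_pytorch checkpoint used in the model examples further down this page (the example sentences are illustrative only):

>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("susnato/ernie-m-base_pytorch")
>>> single = tokenizer("Hello, my dog is cute")
>>> pair = tokenizer("Hello, my dog is cute", "He likes to play fetch.")
>>> # special tokens are added automatically: [CLS] X [SEP] for a single sequence,
>>> # [CLS] A [SEP] [SEP] B [SEP] for a pair (see build_inputs_with_special_tokens below)
>>> tokens = tokenizer.convert_ids_to_tokens(pair["input_ids"])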
build_inputs_with_special_tokens < source > ( token_ids_0 token_ids_1 = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs to which the special tokens will be added. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of input IDs with the appropriate special tokens. Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. An ErnieM sequence has the following format: single sequence: [CLS] X [SEP] pair of sequences: [CLS] A [SEP] [SEP] B [SEP] get_special_tokens_mask < source > ( token_ids_0 token_ids_1 = None already_has_special_tokens = False ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs of the first sequence. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. already_has_special_tokens (bool, optional, defaults to False) — Whether or not the token list is already formatted with special tokens for the model. The list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer encode method. create_token_type_ids_from_sequences < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — The first tokenized sequence. token_ids_1 (List[int], optional) — The second tokenized sequence. The token type ids. Create the token type IDs corresponding to the sequences passed. What are token type IDs? Should be overridden in a subclass if the model has a special way of building those. save_vocabulary < source > ( save_directory: str filename_prefix: typing.Optional[str] = None ) ErnieMModel class transformers.ErnieMModel < source > ( config add_pooling_layer = True ) Parameters config (ErnieMConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare ErnieM Model transformer outputting raw hidden-states without any specific head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
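Like other configuration-driven models in the library, ErnieMModel can be instantiated either from an ErnieMConfig (randomly initialized weights) or from a pretrained checkpoint. A brief sketch, using the config defaults and the susnato/ernie-m-base_pytorch checkpoint referenced on this page:

>>> from transformers import ErnieMConfig, ErnieMModel

>>> # initializing from a configuration yields a model with random weights
>>> configuration = ErnieMConfig()
>>> model = ErnieMModel(configuration)

>>> # loading pretrained weights instead
>>> model = ErnieMModel.from_pretrained("susnato/ernie-m-base_pytorch")
>>> configuration = model.config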
forward < source > ( input_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None use_cache: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using ErnieMTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ErnieMConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output.
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. The ErnieMModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, ErnieMModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("susnato/ernie-m-base_pytorch") >>> model = ErnieMModel.from_pretrained("susnato/ernie-m-base_pytorch") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state ErnieMForSequenceClassification class transformers.ErnieMForSequenceClassification < source > ( config ) Parameters config (ErnieMConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. ErnieM Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is a PyTorch torch.nn.Module sub-class. 
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.List[torch.Tensor]] = None use_cache: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None return_dict: typing.Optional[bool] = True labels: typing.Optional[torch.Tensor] = None ) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using ErnieMTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ErnieMConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). 
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The ErnieMForSequenceClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example of single-label classification: >>> import torch >>> from transformers import AutoTokenizer, ErnieMForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("susnato/ernie-m-base_pytorch") >>> model = ErnieMForSequenceClassification.from_pretrained("susnato/ernie-m-base_pytorch") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_id = logits.argmax().item() >>> >>> num_labels = len(model.config.id2label) >>> model = ErnieMForSequenceClassification.from_pretrained("susnato/ernie-m-base_pytorch", num_labels=num_labels) >>> labels = torch.tensor([1]) >>> loss = model(**inputs, labels=labels).loss Example of multi-label classification: >>> import torch >>> from transformers import AutoTokenizer, ErnieMForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("susnato/ernie-m-base_pytorch") >>> model = ErnieMForSequenceClassification.from_pretrained("susnato/ernie-m-base_pytorch", problem_type="multi_label_classification") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5] >>> >>> num_labels = len(model.config.id2label) >>> model = ErnieMForSequenceClassification.from_pretrained( ... "susnato/ernie-m-base_pytorch", num_labels=num_labels, problem_type="multi_label_classification" ... ) >>> labels = torch.sum( ... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1 ... ).to(torch.float) >>> loss = model(**inputs, labels=labels).loss ErnieMForMultipleChoice class transformers.ErnieMForMultipleChoice < source > ( config ) Parameters config (ErnieMConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. ErnieM Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model inherits from PreTrainedModel. 
Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = True ) → transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using ErnieMTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See input_ids above) A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ErnieMConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above). 
Classification scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The ErnieMForMultipleChoice forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, ErnieMForMultipleChoice >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("susnato/ernie-m-base_pytorch") >>> model = ErnieMForMultipleChoice.from_pretrained("susnato/ernie-m-base_pytorch") >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." >>> choice0 = "It is eaten with a fork and a knife." >>> choice1 = "It is eaten while held in the hand." >>> labels = torch.tensor(0).unsqueeze(0) >>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True) >>> outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) >>> >>> loss = outputs.loss >>> logits = outputs.logits ErnieMForTokenClassification class transformers.ErnieMForTokenClassification < source > ( config ) Parameters config (ErnieMConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. ErnieM Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.List[torch.Tensor]] = None output_hidden_states: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None return_dict: typing.Optional[bool] = True labels: typing.Optional[torch.Tensor] = None ) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using ErnieMTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1]. A transformers.modeling_outputs.TokenClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ErnieMConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. 
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The ErnieMForTokenClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, ErnieMForTokenClassification >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("susnato/ernie-m-base_pytorch") >>> model = ErnieMForTokenClassification.from_pretrained("susnato/ernie-m-base_pytorch") >>> inputs = tokenizer( ... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt" ... ) >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_token_class_ids = logits.argmax(-1) >>> >>> >>> >>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]] >>> labels = predicted_token_class_ids >>> loss = model(**inputs, labels=labels).loss ErnieMForQuestionAnswering class transformers.ErnieMForQuestionAnswering < source > ( config ) Parameters config (ErnieMConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. ErnieM Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute span start logits and span end logits). This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None start_positions: typing.Optional[torch.Tensor] = None end_positions: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = True ) → transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using ErnieMTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. 
What are attention masks? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. start_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. end_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ErnieMConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The ErnieMForQuestionAnswering forward method, overrides the __call__ special method. 
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, ErnieMForQuestionAnswering >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("susnato/ernie-m-base_pytorch") >>> model = ErnieMForQuestionAnswering.from_pretrained("susnato/ernie-m-base_pytorch") >>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" >>> inputs = tokenizer(question, text, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> answer_start_index = outputs.start_logits.argmax() >>> answer_end_index = outputs.end_logits.argmax() >>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] >>> >>> target_start_index = torch.tensor([14]) >>> target_end_index = torch.tensor([15]) >>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index) >>> loss = outputs.loss ErnieMForInformationExtraction class transformers.ErnieMForInformationExtraction < source > ( config ) Parameters config (ErnieMConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. ErnieMForInformationExtraction is an Ernie-M Model with two linear layers on top of the hidden-states output to compute start_prob and end_prob, designed for Universal Information Extraction. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None start_positions: typing.Optional[torch.Tensor] = None end_positions: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = True ) Parameters input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using ErnieMTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules.
Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. start_positions (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for position (index) for computing the start_positions loss. Position outside of the sequence are not taken into account for computing the loss. end_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) for computing the end_positions loss. Position outside of the sequence are not taken into account for computing the loss. The ErnieMForInformationExtraction forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
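The reference above does not include a usage example for this head. The following is a rough sketch only: it assumes the susnato/ernie-m-base_pytorch checkpoint used elsewhere on this page, a UIE-style prompt/text pairing, and that the returned output exposes the per-token span scores as start_logits and end_logits, in the same way as the question-answering head.

>>> from transformers import AutoTokenizer, ErnieMForInformationExtraction
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("susnato/ernie-m-base_pytorch")
>>> model = ErnieMForInformationExtraction.from_pretrained("susnato/ernie-m-base_pytorch")

>>> # the extraction "schema" (here: "time") and the text are encoded as a sequence pair
>>> inputs = tokenizer("time", "The Industrial Revolution took place from the 18th to the 19th century.", return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # assumed output fields (see the note above): highest-scoring start and end positions of the extracted span
>>> start_index = outputs.start_logits.argmax()
>>> end_index = outputs.end_logits.argmax()
>>> span_tokens = inputs.input_ids[0, start_index : end_index + 1]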
https://huggingface.co/docs/transformers/model_doc/auto
Auto Classes In many cases, the architecture you want to use can be guessed from the name or the path of the pretrained model you are supplying to the from_pretrained() method. AutoClasses are here to do this job for you so that you automatically retrieve the relevant model given the name/path to the pretrained weights/config/vocabulary. Instantiating one of AutoConfig, AutoModel, and AutoTokenizer will directly create a class of the relevant architecture. For instance, model = AutoModel.from_pretrained("bert-base-cased") will create a model that is an instance of BertModel. There is one class of AutoModel for each task, and for each backend (PyTorch, TensorFlow, or Flax). Extending the Auto Classes Each of the auto classes has a method to be extended with your custom classes. For instance, if you have defined a custom model class NewModel, make sure you have a matching NewModelConfig; you can then add them to the auto classes like this: from transformers import AutoConfig, AutoModel AutoConfig.register("new-model", NewModelConfig) AutoModel.register(NewModelConfig, NewModel) You will then be able to use the auto classes like you would usually do! If your NewModelConfig is a subclass of PretrainedConfig, make sure its model_type attribute is set to the same key you use when registering the config (here "new-model"). Likewise, if your NewModel is a subclass of PreTrainedModel, make sure its config_class attribute is set to the same class you use when registering the model (here NewModelConfig). AutoConfig This is a generic configuration class that will be instantiated as one of the configuration classes of the library when created with the from_pretrained() class method. This class cannot be instantiated directly using __init__() (throws an error). from_pretrained < source > ( pretrained_model_name_or_path **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model configuration hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing a configuration file saved using the save_pretrained() method, e.g., ./my_model_directory/. A path or url to a saved configuration JSON file, e.g., ./my_model_directory/configuration.json. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. force_download (bool, optional, defaults to False) — Whether or not to force (re-)downloading the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
return_unused_kwargs (bool, optional, defaults to False) — If False, then this function returns just the final configuration object. If True, then this functions returns a Tuple(config, unused_kwargs) where unused_kwargs is a dictionary consisting of the key/value pairs whose keys are not configuration attributes: i.e., the part of kwargs which has not been used to update config and is otherwise ignored. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. kwargs(additional keyword arguments, optional) — The values in kwargs of any keys which are configuration attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are not configuration attributes is controlled by the return_unused_kwargs keyword parameter. Instantiate one of the configuration classes of the library from a pretrained model configuration. The configuration class to instantiate is selected based on the model_type property of the config object that is loaded, or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: albert — AlbertConfig (ALBERT model) align — AlignConfig (ALIGN model) altclip — AltCLIPConfig (AltCLIP model) audio-spectrogram-transformer — ASTConfig (Audio Spectrogram Transformer model) autoformer — AutoformerConfig (Autoformer model) bark — BarkConfig (Bark model) bart — BartConfig (BART model) beit — BeitConfig (BEiT model) bert — BertConfig (BERT model) bert-generation — BertGenerationConfig (Bert Generation model) big_bird — BigBirdConfig (BigBird model) bigbird_pegasus — BigBirdPegasusConfig (BigBird-Pegasus model) biogpt — BioGptConfig (BioGpt model) bit — BitConfig (BiT model) blenderbot — BlenderbotConfig (Blenderbot model) blenderbot-small — BlenderbotSmallConfig (BlenderbotSmall model) blip — BlipConfig (BLIP model) blip-2 — Blip2Config (BLIP-2 model) bloom — BloomConfig (BLOOM model) bridgetower — BridgeTowerConfig (BridgeTower model) bros — BrosConfig (BROS model) camembert — CamembertConfig (CamemBERT model) canine — CanineConfig (CANINE model) chinese_clip — ChineseCLIPConfig (Chinese-CLIP model) clap — ClapConfig (CLAP model) clip — CLIPConfig (CLIP model) clipseg — CLIPSegConfig (CLIPSeg model) code_llama — LlamaConfig (CodeLlama model) codegen — CodeGenConfig (CodeGen model) conditional_detr — ConditionalDetrConfig (Conditional DETR model) convbert — ConvBertConfig (ConvBERT model) convnext — ConvNextConfig (ConvNeXT model) convnextv2 — ConvNextV2Config (ConvNeXTV2 model) cpmant — CpmAntConfig (CPM-Ant model) ctrl — CTRLConfig (CTRL model) cvt — CvtConfig (CvT model) data2vec-audio — Data2VecAudioConfig (Data2VecAudio model) data2vec-text — Data2VecTextConfig (Data2VecText model) data2vec-vision — Data2VecVisionConfig (Data2VecVision model) deberta — DebertaConfig (DeBERTa model) deberta-v2 — DebertaV2Config (DeBERTa-v2 model) decision_transformer — DecisionTransformerConfig (Decision Transformer model) deformable_detr — DeformableDetrConfig (Deformable DETR model) deit — DeiTConfig (DeiT model) deta — DetaConfig (DETA model) detr — DetrConfig (DETR model) dinat — DinatConfig (DiNAT model) dinov2 — Dinov2Config (DINOv2 model) distilbert — DistilBertConfig (DistilBERT model) donut-swin — DonutSwinConfig (DonutSwin model) dpr — DPRConfig (DPR model) dpt 
— DPTConfig (DPT model) efficientformer — EfficientFormerConfig (EfficientFormer model) efficientnet — EfficientNetConfig (EfficientNet model) electra — ElectraConfig (ELECTRA model) encodec — EncodecConfig (EnCodec model) encoder-decoder — EncoderDecoderConfig (Encoder decoder model) ernie — ErnieConfig (ERNIE model) ernie_m — ErnieMConfig (ErnieM model) esm — EsmConfig (ESM model) falcon — FalconConfig (Falcon model) flaubert — FlaubertConfig (FlauBERT model) flava — FlavaConfig (FLAVA model) fnet — FNetConfig (FNet model) focalnet — FocalNetConfig (FocalNet model) fsmt — FSMTConfig (FairSeq Machine-Translation model) funnel — FunnelConfig (Funnel Transformer model) git — GitConfig (GIT model) glpn — GLPNConfig (GLPN model) gpt-sw3 — GPT2Config (GPT-Sw3 model) gpt2 — GPT2Config (OpenAI GPT-2 model) gpt_bigcode — GPTBigCodeConfig (GPTBigCode model) gpt_neo — GPTNeoConfig (GPT Neo model) gpt_neox — GPTNeoXConfig (GPT NeoX model) gpt_neox_japanese — GPTNeoXJapaneseConfig (GPT NeoX Japanese model) gptj — GPTJConfig (GPT-J model) gptsan-japanese — GPTSanJapaneseConfig (GPTSAN-japanese model) graphormer — GraphormerConfig (Graphormer model) groupvit — GroupViTConfig (GroupViT model) hubert — HubertConfig (Hubert model) ibert — IBertConfig (I-BERT model) idefics — IdeficsConfig (IDEFICS model) imagegpt — ImageGPTConfig (ImageGPT model) informer — InformerConfig (Informer model) instructblip — InstructBlipConfig (InstructBLIP model) jukebox — JukeboxConfig (Jukebox model) layoutlm — LayoutLMConfig (LayoutLM model) layoutlmv2 — LayoutLMv2Config (LayoutLMv2 model) layoutlmv3 — LayoutLMv3Config (LayoutLMv3 model) led — LEDConfig (LED model) levit — LevitConfig (LeViT model) lilt — LiltConfig (LiLT model) llama — LlamaConfig (LLaMA model) longformer — LongformerConfig (Longformer model) longt5 — LongT5Config (LongT5 model) luke — LukeConfig (LUKE model) lxmert — LxmertConfig (LXMERT model) m2m_100 — M2M100Config (M2M100 model) marian — MarianConfig (Marian model) markuplm — MarkupLMConfig (MarkupLM model) mask2former — Mask2FormerConfig (Mask2Former model) maskformer — MaskFormerConfig (MaskFormer model) maskformer-swin — MaskFormerSwinConfig (MaskFormerSwin model) mbart — MBartConfig (mBART model) mctct — MCTCTConfig (M-CTC-T model) mega — MegaConfig (MEGA model) megatron-bert — MegatronBertConfig (Megatron-BERT model) mgp-str — MgpstrConfig (MGP-STR model) mistral — MistralConfig (Mistral model) mobilebert — MobileBertConfig (MobileBERT model) mobilenet_v1 — MobileNetV1Config (MobileNetV1 model) mobilenet_v2 — MobileNetV2Config (MobileNetV2 model) mobilevit — MobileViTConfig (MobileViT model) mobilevitv2 — MobileViTV2Config (MobileViTV2 model) mpnet — MPNetConfig (MPNet model) mpt — MptConfig (MPT model) mra — MraConfig (MRA model) mt5 — MT5Config (MT5 model) musicgen — MusicgenConfig (MusicGen model) mvp — MvpConfig (MVP model) nat — NatConfig (NAT model) nezha — NezhaConfig (Nezha model) nllb-moe — NllbMoeConfig (NLLB-MOE model) nougat — VisionEncoderDecoderConfig (Nougat model) nystromformer — NystromformerConfig (Nyströmformer model) oneformer — OneFormerConfig (OneFormer model) open-llama — OpenLlamaConfig (OpenLlama model) openai-gpt — OpenAIGPTConfig (OpenAI GPT model) opt — OPTConfig (OPT model) owlvit — OwlViTConfig (OWL-ViT model) pegasus — PegasusConfig (Pegasus model) pegasus_x — PegasusXConfig (PEGASUS-X model) perceiver — PerceiverConfig (Perceiver model) persimmon — PersimmonConfig (Persimmon model) pix2struct — Pix2StructConfig (Pix2Struct model) plbart — PLBartConfig (PLBart 
model) poolformer — PoolFormerConfig (PoolFormer model) pop2piano — Pop2PianoConfig (Pop2Piano model) prophetnet — ProphetNetConfig (ProphetNet model) pvt — PvtConfig (PVT model) qdqbert — QDQBertConfig (QDQBert model) rag — RagConfig (RAG model) realm — RealmConfig (REALM model) reformer — ReformerConfig (Reformer model) regnet — RegNetConfig (RegNet model) rembert — RemBertConfig (RemBERT model) resnet — ResNetConfig (ResNet model) retribert — RetriBertConfig (RetriBERT model) roberta — RobertaConfig (RoBERTa model) roberta-prelayernorm — RobertaPreLayerNormConfig (RoBERTa-PreLayerNorm model) roc_bert — RoCBertConfig (RoCBert model) roformer — RoFormerConfig (RoFormer model) rwkv — RwkvConfig (RWKV model) sam — SamConfig (SAM model) segformer — SegformerConfig (SegFormer model) sew — SEWConfig (SEW model) sew-d — SEWDConfig (SEW-D model) speech-encoder-decoder — SpeechEncoderDecoderConfig (Speech Encoder decoder model) speech_to_text — Speech2TextConfig (Speech2Text model) speech_to_text_2 — Speech2Text2Config (Speech2Text2 model) speecht5 — SpeechT5Config (SpeechT5 model) splinter — SplinterConfig (Splinter model) squeezebert — SqueezeBertConfig (SqueezeBERT model) swiftformer — SwiftFormerConfig (SwiftFormer model) swin — SwinConfig (Swin Transformer model) swin2sr — Swin2SRConfig (Swin2SR model) swinv2 — Swinv2Config (Swin Transformer V2 model) switch_transformers — SwitchTransformersConfig (SwitchTransformers model) t5 — T5Config (T5 model) table-transformer — TableTransformerConfig (Table Transformer model) tapas — TapasConfig (TAPAS model) time_series_transformer — TimeSeriesTransformerConfig (Time Series Transformer model) timesformer — TimesformerConfig (TimeSformer model) timm_backbone — TimmBackboneConfig (TimmBackbone model) trajectory_transformer — TrajectoryTransformerConfig (Trajectory Transformer model) transfo-xl — TransfoXLConfig (Transformer-XL model) trocr — TrOCRConfig (TrOCR model) tvlt — TvltConfig (TVLT model) umt5 — UMT5Config (UMT5 model) unispeech — UniSpeechConfig (UniSpeech model) unispeech-sat — UniSpeechSatConfig (UniSpeechSat model) upernet — UperNetConfig (UPerNet model) van — VanConfig (VAN model) videomae — VideoMAEConfig (VideoMAE model) vilt — ViltConfig (ViLT model) vision-encoder-decoder — VisionEncoderDecoderConfig (Vision Encoder decoder model) vision-text-dual-encoder — VisionTextDualEncoderConfig (VisionTextDualEncoder model) visual_bert — VisualBertConfig (VisualBERT model) vit — ViTConfig (ViT model) vit_hybrid — ViTHybridConfig (ViT Hybrid model) vit_mae — ViTMAEConfig (ViTMAE model) vit_msn — ViTMSNConfig (ViTMSN model) vitdet — VitDetConfig (VitDet model) vitmatte — VitMatteConfig (ViTMatte model) vits — VitsConfig (VITS model) vivit — VivitConfig (ViViT model) wav2vec2 — Wav2Vec2Config (Wav2Vec2 model) wav2vec2-conformer — Wav2Vec2ConformerConfig (Wav2Vec2-Conformer model) wavlm — WavLMConfig (WavLM model) whisper — WhisperConfig (Whisper model) xclip — XCLIPConfig (X-CLIP model) xglm — XGLMConfig (XGLM model) xlm — XLMConfig (XLM model) xlm-prophetnet — XLMProphetNetConfig (XLM-ProphetNet model) xlm-roberta — XLMRobertaConfig (XLM-RoBERTa model) xlm-roberta-xl — XLMRobertaXLConfig (XLM-RoBERTa-XL model) xlnet — XLNetConfig (XLNet model) xmod — XmodConfig (X-MOD model) yolos — YolosConfig (YOLOS model) yoso — YosoConfig (YOSO model) Examples: >>> from transformers import AutoConfig >>> >>> config = AutoConfig.from_pretrained("bert-base-uncased") >>> >>> config = AutoConfig.from_pretrained("dbmdz/bert-base-german-cased") >>> >>> config = 
AutoConfig.from_pretrained("./test/bert_saved_model/") >>> >>> config = AutoConfig.from_pretrained("./test/bert_saved_model/my_configuration.json") >>> >>> config = AutoConfig.from_pretrained("bert-base-uncased", output_attentions=True, foo=False) >>> config.output_attentions True >>> config, unused_kwargs = AutoConfig.from_pretrained( ... "bert-base-uncased", output_attentions=True, foo=False, return_unused_kwargs=True ... ) >>> config.output_attentions True >>> unused_kwargs {'foo': False} register < source > ( model_type config exist_ok = False ) Parameters model_type (str) — The model type like “bert” or “gpt”. config (PretrainedConfig) — The config to register. Register a new configuration for this class. AutoTokenizer This is a generic tokenizer class that will be instantiated as one of the tokenizer classes of the library when created with the AutoTokenizer.from_pretrained() class method. This class cannot be instantiated directly using __init__() (throws an error). from_pretrained < source > ( pretrained_model_name_or_path *inputs **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a predefined tokenizer hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing vocabulary files required by the tokenizer, for instance saved using the save_pretrained() method, e.g., ./my_model_directory/. A path or url to a single saved vocabulary file if and only if the tokenizer only requires a single vocabulary file (like Bert or XLNet), e.g.: ./my_model_directory/vocab.txt. (Not applicable to all derived classes) inputs (additional positional arguments, optional) — Will be passed along to the Tokenizer __init__() method. config (PretrainedConfig, optional) — The configuration object used to determine the tokenizer class to instantiate. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download the model weights and configuration files and override the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. subfolder (str, optional) — In case the relevant files are located inside a subfolder of the model repo on huggingface.co (e.g. for facebook/rag-token-base), specify it here. use_fast (bool, optional, defaults to True) — Use a fast Rust-based tokenizer if it is supported for a given model. If a fast tokenizer is not available for a given model, a normal Python-based tokenizer is returned instead. tokenizer_type (str, optional) — Tokenizer type to be loaded. 
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. kwargs (additional keyword arguments, optional) — Will be passed to the Tokenizer __init__() method. Can be used to set special tokens like bos_token, eos_token, unk_token, sep_token, pad_token, cls_token, mask_token, additional_special_tokens. See parameters in the __init__() for more details. Instantiate one of the tokenizer classes of the library from a pretrained model vocabulary. The tokenizer class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: albert — AlbertTokenizer or AlbertTokenizerFast (ALBERT model) align — BertTokenizer or BertTokenizerFast (ALIGN model) bark — BertTokenizer or BertTokenizerFast (Bark model) bart — BartTokenizer or BartTokenizerFast (BART model) barthez — BarthezTokenizer or BarthezTokenizerFast (BARThez model) bartpho — BartphoTokenizer (BARTpho model) bert — BertTokenizer or BertTokenizerFast (BERT model) bert-generation — BertGenerationTokenizer (Bert Generation model) bert-japanese — BertJapaneseTokenizer (BertJapanese model) bertweet — BertweetTokenizer (BERTweet model) big_bird — BigBirdTokenizer or BigBirdTokenizerFast (BigBird model) bigbird_pegasus — PegasusTokenizer or PegasusTokenizerFast (BigBird-Pegasus model) biogpt — BioGptTokenizer (BioGpt model) blenderbot — BlenderbotTokenizer or BlenderbotTokenizerFast (Blenderbot model) blenderbot-small — BlenderbotSmallTokenizer (BlenderbotSmall model) blip — BertTokenizer or BertTokenizerFast (BLIP model) blip-2 — GPT2Tokenizer or GPT2TokenizerFast (BLIP-2 model) bloom — BloomTokenizerFast (BLOOM model) bridgetower — RobertaTokenizer or RobertaTokenizerFast (BridgeTower model) bros — BertTokenizer or BertTokenizerFast (BROS model) byt5 — ByT5Tokenizer (ByT5 model) camembert — CamembertTokenizer or CamembertTokenizerFast (CamemBERT model) canine — CanineTokenizer (CANINE model) chinese_clip — BertTokenizer or BertTokenizerFast (Chinese-CLIP model) clap — RobertaTokenizer or RobertaTokenizerFast (CLAP model) clip — CLIPTokenizer or CLIPTokenizerFast (CLIP model) clipseg — CLIPTokenizer or CLIPTokenizerFast (CLIPSeg model) code_llama — CodeLlamaTokenizer or CodeLlamaTokenizerFast (CodeLlama model) codegen — CodeGenTokenizer or CodeGenTokenizerFast (CodeGen model) convbert — ConvBertTokenizer or ConvBertTokenizerFast (ConvBERT model) cpm — CpmTokenizer or CpmTokenizerFast (CPM model) cpmant — CpmAntTokenizer (CPM-Ant model) ctrl — CTRLTokenizer (CTRL model) data2vec-audio — Wav2Vec2CTCTokenizer (Data2VecAudio model) data2vec-text — RobertaTokenizer or RobertaTokenizerFast (Data2VecText model) deberta — DebertaTokenizer or DebertaTokenizerFast (DeBERTa model) deberta-v2 — DebertaV2Tokenizer or DebertaV2TokenizerFast (DeBERTa-v2 model) distilbert — DistilBertTokenizer or DistilBertTokenizerFast (DistilBERT model) dpr — DPRQuestionEncoderTokenizer or DPRQuestionEncoderTokenizerFast (DPR model) electra — ElectraTokenizer or ElectraTokenizerFast (ELECTRA model) ernie — BertTokenizer or BertTokenizerFast (ERNIE model) ernie_m — ErnieMTokenizer (ErnieM model) esm — EsmTokenizer (ESM model) 
flaubert — FlaubertTokenizer (FlauBERT model) fnet — FNetTokenizer or FNetTokenizerFast (FNet model) fsmt — FSMTTokenizer (FairSeq Machine-Translation model) funnel — FunnelTokenizer or FunnelTokenizerFast (Funnel Transformer model) git — BertTokenizer or BertTokenizerFast (GIT model) gpt-sw3 — GPTSw3Tokenizer (GPT-Sw3 model) gpt2 — GPT2Tokenizer or GPT2TokenizerFast (OpenAI GPT-2 model) gpt_bigcode — GPT2Tokenizer or GPT2TokenizerFast (GPTBigCode model) gpt_neo — GPT2Tokenizer or GPT2TokenizerFast (GPT Neo model) gpt_neox — GPTNeoXTokenizerFast (GPT NeoX model) gpt_neox_japanese — GPTNeoXJapaneseTokenizer (GPT NeoX Japanese model) gptj — GPT2Tokenizer or GPT2TokenizerFast (GPT-J model) gptsan-japanese — GPTSanJapaneseTokenizer (GPTSAN-japanese model) groupvit — CLIPTokenizer or CLIPTokenizerFast (GroupViT model) herbert — HerbertTokenizer or HerbertTokenizerFast (HerBERT model) hubert — Wav2Vec2CTCTokenizer (Hubert model) ibert — RobertaTokenizer or RobertaTokenizerFast (I-BERT model) idefics — LlamaTokenizerFast (IDEFICS model) instructblip — GPT2Tokenizer or GPT2TokenizerFast (InstructBLIP model) jukebox — JukeboxTokenizer (Jukebox model) layoutlm — LayoutLMTokenizer or LayoutLMTokenizerFast (LayoutLM model) layoutlmv2 — LayoutLMv2Tokenizer or LayoutLMv2TokenizerFast (LayoutLMv2 model) layoutlmv3 — LayoutLMv3Tokenizer or LayoutLMv3TokenizerFast (LayoutLMv3 model) layoutxlm — LayoutXLMTokenizer or LayoutXLMTokenizerFast (LayoutXLM model) led — LEDTokenizer or LEDTokenizerFast (LED model) lilt — LayoutLMv3Tokenizer or LayoutLMv3TokenizerFast (LiLT model) llama — LlamaTokenizer or LlamaTokenizerFast (LLaMA model) longformer — LongformerTokenizer or LongformerTokenizerFast (Longformer model) longt5 — T5Tokenizer or T5TokenizerFast (LongT5 model) luke — LukeTokenizer (LUKE model) lxmert — LxmertTokenizer or LxmertTokenizerFast (LXMERT model) m2m_100 — M2M100Tokenizer (M2M100 model) marian — MarianTokenizer (Marian model) mbart — MBartTokenizer or MBartTokenizerFast (mBART model) mbart50 — MBart50Tokenizer or MBart50TokenizerFast (mBART-50 model) mega — RobertaTokenizer or RobertaTokenizerFast (MEGA model) megatron-bert — BertTokenizer or BertTokenizerFast (Megatron-BERT model) mgp-str — MgpstrTokenizer (MGP-STR model) mistral — LlamaTokenizer or LlamaTokenizerFast (Mistral model) mluke — MLukeTokenizer (mLUKE model) mobilebert — MobileBertTokenizer or MobileBertTokenizerFast (MobileBERT model) mpnet — MPNetTokenizer or MPNetTokenizerFast (MPNet model) mpt — GPTNeoXTokenizerFast (MPT model) mra — RobertaTokenizer or RobertaTokenizerFast (MRA model) mt5 — MT5Tokenizer or MT5TokenizerFast (MT5 model) musicgen — T5Tokenizer or T5TokenizerFast (MusicGen model) mvp — MvpTokenizer or MvpTokenizerFast (MVP model) nezha — BertTokenizer or BertTokenizerFast (Nezha model) nllb — NllbTokenizer or NllbTokenizerFast (NLLB model) nllb-moe — NllbTokenizer or NllbTokenizerFast (NLLB-MOE model) nystromformer — AlbertTokenizer or AlbertTokenizerFast (Nyströmformer model) oneformer — CLIPTokenizer or CLIPTokenizerFast (OneFormer model) openai-gpt — OpenAIGPTTokenizer or OpenAIGPTTokenizerFast (OpenAI GPT model) opt — GPT2Tokenizer or GPT2TokenizerFast (OPT model) owlvit — CLIPTokenizer or CLIPTokenizerFast (OWL-ViT model) pegasus — PegasusTokenizer or PegasusTokenizerFast (Pegasus model) pegasus_x — PegasusTokenizer or PegasusTokenizerFast (PEGASUS-X model) perceiver — PerceiverTokenizer (Perceiver model) persimmon — LlamaTokenizer or LlamaTokenizerFast (Persimmon model) phobert — PhobertTokenizer (PhoBERT 
model) pix2struct — T5Tokenizer or T5TokenizerFast (Pix2Struct model) plbart — PLBartTokenizer (PLBart model) prophetnet — ProphetNetTokenizer (ProphetNet model) qdqbert — BertTokenizer or BertTokenizerFast (QDQBert model) rag — RagTokenizer (RAG model) realm — RealmTokenizer or RealmTokenizerFast (REALM model) reformer — ReformerTokenizer or ReformerTokenizerFast (Reformer model) rembert — RemBertTokenizer or RemBertTokenizerFast (RemBERT model) retribert — RetriBertTokenizer or RetriBertTokenizerFast (RetriBERT model) roberta — RobertaTokenizer or RobertaTokenizerFast (RoBERTa model) roberta-prelayernorm — RobertaTokenizer or RobertaTokenizerFast (RoBERTa-PreLayerNorm model) roc_bert — RoCBertTokenizer (RoCBert model) roformer — RoFormerTokenizer or RoFormerTokenizerFast (RoFormer model) rwkv — GPTNeoXTokenizerFast (RWKV model) speech_to_text — Speech2TextTokenizer (Speech2Text model) speech_to_text_2 — Speech2Text2Tokenizer (Speech2Text2 model) speecht5 — SpeechT5Tokenizer (SpeechT5 model) splinter — SplinterTokenizer or SplinterTokenizerFast (Splinter model) squeezebert — SqueezeBertTokenizer or SqueezeBertTokenizerFast (SqueezeBERT model) switch_transformers — T5Tokenizer or T5TokenizerFast (SwitchTransformers model) t5 — T5Tokenizer or T5TokenizerFast (T5 model) tapas — TapasTokenizer (TAPAS model) tapex — TapexTokenizer (TAPEX model) transfo-xl — TransfoXLTokenizer (Transformer-XL model) umt5 — T5Tokenizer or T5TokenizerFast (UMT5 model) vilt — BertTokenizer or BertTokenizerFast (ViLT model) visual_bert — BertTokenizer or BertTokenizerFast (VisualBERT model) vits — VitsTokenizer (VITS model) wav2vec2 — Wav2Vec2CTCTokenizer (Wav2Vec2 model) wav2vec2-conformer — Wav2Vec2CTCTokenizer (Wav2Vec2-Conformer model) wav2vec2_phoneme — Wav2Vec2PhonemeCTCTokenizer (Wav2Vec2Phoneme model) whisper — WhisperTokenizer or WhisperTokenizerFast (Whisper model) xclip — CLIPTokenizer or CLIPTokenizerFast (X-CLIP model) xglm — XGLMTokenizer or XGLMTokenizerFast (XGLM model) xlm — XLMTokenizer (XLM model) xlm-prophetnet — XLMProphetNetTokenizer (XLM-ProphetNet model) xlm-roberta — XLMRobertaTokenizer or XLMRobertaTokenizerFast (XLM-RoBERTa model) xlm-roberta-xl — XLMRobertaTokenizer or XLMRobertaTokenizerFast (XLM-RoBERTa-XL model) xlnet — XLNetTokenizer or XLNetTokenizerFast (XLNet model) xmod — XLMRobertaTokenizer or XLMRobertaTokenizerFast (X-MOD model) yoso — AlbertTokenizer or AlbertTokenizerFast (YOSO model) Examples: >>> from transformers import AutoTokenizer >>> >>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") >>> >>> tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-cased") >>> >>> >>> >>> tokenizer = AutoTokenizer.from_pretrained("roberta-base", add_prefix_space=True) register < source > ( config_class slow_tokenizer_class = None fast_tokenizer_class = None exist_ok = False ) Parameters config_class (PretrainedConfig) — The configuration corresponding to the model to register. slow_tokenizer_class (PretrainedTokenizer, optional) — The slow tokenizer to register. fast_tokenizer_class (PretrainedTokenizerFast, optional) — The fast tokenizer to register. Register a new tokenizer in this mapping. AutoFeatureExtractor This is a generic feature extractor class that will be instantiated as one of the feature extractor classes of the library when created with the AutoFeatureExtractor.from_pretrained() class method. This class cannot be instantiated directly using __init__() (throws an error). 
from_pretrained < source > ( pretrained_model_name_or_path **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — This can be either: a string, the model id of a pretrained feature_extractor hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. a path to a directory containing a feature extractor file saved using the save_pretrained() method, e.g., ./my_model_directory/. a path or url to a saved feature extractor JSON file, e.g., ./my_model_directory/preprocessor_config.json. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model feature extractor should be cached if the standard cache should not be used. force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the feature extractor files and override the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. return_unused_kwargs (bool, optional, defaults to False) — If False, then this function returns just the final feature extractor object. If True, then this function returns a Tuple(feature_extractor, unused_kwargs) where unused_kwargs is a dictionary consisting of the key/value pairs whose keys are not feature extractor attributes: i.e., the part of kwargs which has not been used to update feature_extractor and is otherwise ignored. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. kwargs (Dict[str, Any], optional) — The values in kwargs of any keys which are feature extractor attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are not feature extractor attributes is controlled by the return_unused_kwargs keyword parameter. Instantiate one of the feature extractor classes of the library from a pretrained model vocabulary.
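For a quick, hedged illustration of this resolution before the full mapping below (the wav2vec2 checkpoint is the same one used in the Examples further down, and the class named in the comment is the one we would expect for it):
>>> from transformers import AutoFeatureExtractor
>>> # The checkpoint's configuration declares model_type "wav2vec2", so AutoFeatureExtractor
>>> # resolves to Wav2Vec2FeatureExtractor here.
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
>>> type(feature_extractor).__name__
'Wav2Vec2FeatureExtractor'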
The feature extractor class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: audio-spectrogram-transformer — ASTFeatureExtractor (Audio Spectrogram Transformer model) beit — BeitFeatureExtractor (BEiT model) chinese_clip — ChineseCLIPFeatureExtractor (Chinese-CLIP model) clap — ClapFeatureExtractor (CLAP model) clip — CLIPFeatureExtractor (CLIP model) clipseg — ViTFeatureExtractor (CLIPSeg model) conditional_detr — ConditionalDetrFeatureExtractor (Conditional DETR model) convnext — ConvNextFeatureExtractor (ConvNeXT model) cvt — ConvNextFeatureExtractor (CvT model) data2vec-audio — Wav2Vec2FeatureExtractor (Data2VecAudio model) data2vec-vision — BeitFeatureExtractor (Data2VecVision model) deformable_detr — DeformableDetrFeatureExtractor (Deformable DETR model) deit — DeiTFeatureExtractor (DeiT model) detr — DetrFeatureExtractor (DETR model) dinat — ViTFeatureExtractor (DiNAT model) donut-swin — DonutFeatureExtractor (DonutSwin model) dpt — DPTFeatureExtractor (DPT model) encodec — EncodecFeatureExtractor (EnCodec model) flava — FlavaFeatureExtractor (FLAVA model) glpn — GLPNFeatureExtractor (GLPN model) groupvit — CLIPFeatureExtractor (GroupViT model) hubert — Wav2Vec2FeatureExtractor (Hubert model) imagegpt — ImageGPTFeatureExtractor (ImageGPT model) layoutlmv2 — LayoutLMv2FeatureExtractor (LayoutLMv2 model) layoutlmv3 — LayoutLMv3FeatureExtractor (LayoutLMv3 model) levit — LevitFeatureExtractor (LeViT model) maskformer — MaskFormerFeatureExtractor (MaskFormer model) mctct — MCTCTFeatureExtractor (M-CTC-T model) mobilenet_v1 — MobileNetV1FeatureExtractor (MobileNetV1 model) mobilenet_v2 — MobileNetV2FeatureExtractor (MobileNetV2 model) mobilevit — MobileViTFeatureExtractor (MobileViT model) nat — ViTFeatureExtractor (NAT model) owlvit — OwlViTFeatureExtractor (OWL-ViT model) perceiver — PerceiverFeatureExtractor (Perceiver model) poolformer — PoolFormerFeatureExtractor (PoolFormer model) pop2piano — Pop2PianoFeatureExtractor (Pop2Piano model) regnet — ConvNextFeatureExtractor (RegNet model) resnet — ConvNextFeatureExtractor (ResNet model) segformer — SegformerFeatureExtractor (SegFormer model) sew — Wav2Vec2FeatureExtractor (SEW model) sew-d — Wav2Vec2FeatureExtractor (SEW-D model) speech_to_text — Speech2TextFeatureExtractor (Speech2Text model) speecht5 — SpeechT5FeatureExtractor (SpeechT5 model) swiftformer — ViTFeatureExtractor (SwiftFormer model) swin — ViTFeatureExtractor (Swin Transformer model) swinv2 — ViTFeatureExtractor (Swin Transformer V2 model) table-transformer — DetrFeatureExtractor (Table Transformer model) timesformer — VideoMAEFeatureExtractor (TimeSformer model) tvlt — TvltFeatureExtractor (TVLT model) unispeech — Wav2Vec2FeatureExtractor (UniSpeech model) unispeech-sat — Wav2Vec2FeatureExtractor (UniSpeechSat model) van — ConvNextFeatureExtractor (VAN model) videomae — VideoMAEFeatureExtractor (VideoMAE model) vilt — ViltFeatureExtractor (ViLT model) vit — ViTFeatureExtractor (ViT model) vit_mae — ViTFeatureExtractor (ViTMAE model) vit_msn — ViTFeatureExtractor (ViTMSN model) wav2vec2 — Wav2Vec2FeatureExtractor (Wav2Vec2 model) wav2vec2-conformer — Wav2Vec2FeatureExtractor (Wav2Vec2-Conformer model) wavlm — Wav2Vec2FeatureExtractor (WavLM model) whisper — WhisperFeatureExtractor (Whisper model) xclip — CLIPFeatureExtractor (X-CLIP model) yolos — 
YolosFeatureExtractor (YOLOS model) Passing token=True is required when you want to use a private model. Examples:
>>> from transformers import AutoFeatureExtractor
>>> # Download the feature extractor configuration from huggingface.co and cache it.
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
register < source > ( config_class feature_extractor_class exist_ok = False ) Parameters config_class (PretrainedConfig) — The configuration corresponding to the model to register. feature_extractor_class (FeatureExtractorMixin) — The feature extractor to register. Register a new feature extractor for this class.
AutoImageProcessor
class transformers.AutoImageProcessor < source > ( ) This is a generic image processor class that will be instantiated as one of the image processor classes of the library when created with the AutoImageProcessor.from_pretrained() class method. This class cannot be instantiated directly using __init__() (throws an error).
from_pretrained < source > ( pretrained_model_name_or_path **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — This can be either: a string, the model id of a pretrained image_processor hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. a path to a directory containing an image processor file saved using the save_pretrained() method, e.g., ./my_model_directory/. a path or url to a saved image processor JSON file, e.g., ./my_model_directory/preprocessor_config.json. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model image processor should be cached if the standard cache should not be used. force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the image processor files and override the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. return_unused_kwargs (bool, optional, defaults to False) — If False, then this function returns just the final image processor object. If True, then this function returns a Tuple(image_processor, unused_kwargs) where unused_kwargs is a dictionary consisting of the key/value pairs whose keys are not image processor attributes: i.e., the part of kwargs which has not been used to update image_processor and is otherwise ignored. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (Dict[str, Any], optional) — The values in kwargs of any keys which are image processor attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are not image processor attributes is controlled by the return_unused_kwargs keyword parameter. Instantiate one of the image processor classes of the library from a pretrained model vocabulary. The image processor class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: align — EfficientNetImageProcessor (ALIGN model) beit — BeitImageProcessor (BEiT model) bit — BitImageProcessor (BiT model) blip — BlipImageProcessor (BLIP model) blip-2 — BlipImageProcessor (BLIP-2 model) bridgetower — BridgeTowerImageProcessor (BridgeTower model) chinese_clip — ChineseCLIPImageProcessor (Chinese-CLIP model) clip — CLIPImageProcessor (CLIP model) clipseg — ViTImageProcessor (CLIPSeg model) conditional_detr — ConditionalDetrImageProcessor (Conditional DETR model) convnext — ConvNextImageProcessor (ConvNeXT model) convnextv2 — ConvNextImageProcessor (ConvNeXTV2 model) cvt — ConvNextImageProcessor (CvT model) data2vec-vision — BeitImageProcessor (Data2VecVision model) deformable_detr — DeformableDetrImageProcessor (Deformable DETR model) deit — DeiTImageProcessor (DeiT model) deta — DetaImageProcessor (DETA model) detr — DetrImageProcessor (DETR model) dinat — ViTImageProcessor (DiNAT model) dinov2 — BitImageProcessor (DINOv2 model) donut-swin — DonutImageProcessor (DonutSwin model) dpt — DPTImageProcessor (DPT model) efficientformer — EfficientFormerImageProcessor (EfficientFormer model) efficientnet — EfficientNetImageProcessor (EfficientNet model) flava — FlavaImageProcessor (FLAVA model) focalnet — BitImageProcessor (FocalNet model) git — CLIPImageProcessor (GIT model) glpn — GLPNImageProcessor (GLPN model) groupvit — CLIPImageProcessor (GroupViT model) idefics — IdeficsImageProcessor (IDEFICS model) imagegpt — ImageGPTImageProcessor (ImageGPT model) instructblip — BlipImageProcessor (InstructBLIP model) layoutlmv2 — LayoutLMv2ImageProcessor (LayoutLMv2 model) layoutlmv3 — LayoutLMv3ImageProcessor (LayoutLMv3 model) levit — LevitImageProcessor (LeViT model) mask2former — Mask2FormerImageProcessor (Mask2Former model) maskformer — MaskFormerImageProcessor (MaskFormer model) mgp-str — ViTImageProcessor (MGP-STR model) mobilenet_v1 — MobileNetV1ImageProcessor (MobileNetV1 model) mobilenet_v2 — MobileNetV2ImageProcessor (MobileNetV2 model) mobilevit — MobileViTImageProcessor (MobileViT model) mobilevitv2 — MobileViTImageProcessor (MobileViTV2 model) nat — ViTImageProcessor (NAT model) nougat — NougatImageProcessor (Nougat model) oneformer — OneFormerImageProcessor (OneFormer model) owlvit — OwlViTImageProcessor (OWL-ViT model) perceiver — PerceiverImageProcessor (Perceiver model) pix2struct — Pix2StructImageProcessor (Pix2Struct model) poolformer — PoolFormerImageProcessor (PoolFormer model) pvt — PvtImageProcessor (PVT model) regnet — ConvNextImageProcessor (RegNet model) resnet — ConvNextImageProcessor (ResNet model) sam — SamImageProcessor (SAM model) segformer — SegformerImageProcessor (SegFormer model) swiftformer — ViTImageProcessor (SwiftFormer model) swin — ViTImageProcessor (Swin Transformer model) swin2sr — Swin2SRImageProcessor (Swin2SR model) swinv2 — ViTImageProcessor (Swin Transformer V2 
model) table-transformer — DetrImageProcessor (Table Transformer model) timesformer — VideoMAEImageProcessor (TimeSformer model) tvlt — TvltImageProcessor (TVLT model) upernet — SegformerImageProcessor (UPerNet model) van — ConvNextImageProcessor (VAN model) videomae — VideoMAEImageProcessor (VideoMAE model) vilt — ViltImageProcessor (ViLT model) vit — ViTImageProcessor (ViT model) vit_hybrid — ViTHybridImageProcessor (ViT Hybrid model) vit_mae — ViTImageProcessor (ViTMAE model) vit_msn — ViTImageProcessor (ViTMSN model) vitmatte — VitMatteImageProcessor (ViTMatte model) xclip — CLIPImageProcessor (X-CLIP model) yolos — YolosImageProcessor (YOLOS model) Passing token=True is required when you want to use a private model. Examples: >>> from transformers import AutoImageProcessor >>> >>> image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k") >>> >>> register < source > ( config_class image_processor_class exist_ok = False ) Parameters config_class (PretrainedConfig) — The configuration corresponding to the model to register. image_processor_class (ImageProcessingMixin) — The image processor to register. Register a new image processor for this class. AutoProcessor This is a generic processor class that will be instantiated as one of the processor classes of the library when created with the AutoProcessor.from_pretrained() class method. This class cannot be instantiated directly using __init__() (throws an error). from_pretrained < source > ( pretrained_model_name_or_path **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — This can be either: a string, the model id of a pretrained feature_extractor hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. a path to a directory containing a processor files saved using the save_pretrained() method, e.g., ./my_model_directory/. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model feature extractor should be cached if the standard cache should not be used. force_download (bool, optional, defaults to False) — Whether or not to force to (re-)download the feature extractor files and override the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received file. Attempts to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. return_unused_kwargs (bool, optional, defaults to False) — If False, then this function returns just the final feature extractor object. 
If True, then this functions returns a Tuple(feature_extractor, unused_kwargs) where unused_kwargs is a dictionary consisting of the key/value pairs whose keys are not feature extractor attributes: i.e., the part of kwargs which has not been used to update feature_extractor and is otherwise ignored. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. kwargs (Dict[str, Any], optional) — The values in kwargs of any keys which are feature extractor attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are not feature extractor attributes is controlled by the return_unused_kwargs keyword parameter. Instantiate one of the processor classes of the library from a pretrained model vocabulary. The processor class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible): align — AlignProcessor (ALIGN model) altclip — AltCLIPProcessor (AltCLIP model) bark — BarkProcessor (Bark model) blip — BlipProcessor (BLIP model) blip-2 — Blip2Processor (BLIP-2 model) bridgetower — BridgeTowerProcessor (BridgeTower model) chinese_clip — ChineseCLIPProcessor (Chinese-CLIP model) clap — ClapProcessor (CLAP model) clip — CLIPProcessor (CLIP model) clipseg — CLIPSegProcessor (CLIPSeg model) flava — FlavaProcessor (FLAVA model) git — GitProcessor (GIT model) groupvit — CLIPProcessor (GroupViT model) hubert — Wav2Vec2Processor (Hubert model) idefics — IdeficsProcessor (IDEFICS model) instructblip — InstructBlipProcessor (InstructBLIP model) layoutlmv2 — LayoutLMv2Processor (LayoutLMv2 model) layoutlmv3 — LayoutLMv3Processor (LayoutLMv3 model) markuplm — MarkupLMProcessor (MarkupLM model) mctct — MCTCTProcessor (M-CTC-T model) mgp-str — MgpstrProcessor (MGP-STR model) oneformer — OneFormerProcessor (OneFormer model) owlvit — OwlViTProcessor (OWL-ViT model) pix2struct — Pix2StructProcessor (Pix2Struct model) pop2piano — Pop2PianoProcessor (Pop2Piano model) sam — SamProcessor (SAM model) sew — Wav2Vec2Processor (SEW model) sew-d — Wav2Vec2Processor (SEW-D model) speech_to_text — Speech2TextProcessor (Speech2Text model) speech_to_text_2 — Speech2Text2Processor (Speech2Text2 model) speecht5 — SpeechT5Processor (SpeechT5 model) trocr — TrOCRProcessor (TrOCR model) tvlt — TvltProcessor (TVLT model) unispeech — Wav2Vec2Processor (UniSpeech model) unispeech-sat — Wav2Vec2Processor (UniSpeechSat model) vilt — ViltProcessor (ViLT model) vision-text-dual-encoder — VisionTextDualEncoderProcessor (VisionTextDualEncoder model) wav2vec2 — Wav2Vec2Processor (Wav2Vec2 model) wav2vec2-conformer — Wav2Vec2Processor (Wav2Vec2-Conformer model) wavlm — Wav2Vec2Processor (WavLM model) whisper — WhisperProcessor (Whisper model) xclip — XCLIPProcessor (X-CLIP model) Passing token=True is required when you want to use a private model. Examples: >>> from transformers import AutoProcessor >>> >>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h") >>> >>> register < source > ( config_class processor_class exist_ok = False ) Parameters config_class (PretrainedConfig) — The configuration corresponding to the model to register. processor_class (FeatureExtractorMixin) — The processor to register. 
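A hedged sketch of how these two arguments are typically passed together; MyConfig and MyProcessor below are hypothetical placeholder classes for illustration only, not part of the library:
>>> from transformers import AutoConfig, AutoProcessor, PretrainedConfig, ProcessorMixin
>>> # Hypothetical custom classes; in practice they would live in your own module.
>>> class MyConfig(PretrainedConfig):
...     model_type = "my-model"
>>> class MyProcessor(ProcessorMixin):
...     pass
>>> # Make the auto classes aware of the new model type so from_pretrained() can resolve it.
>>> AutoConfig.register("my-model", MyConfig)
>>> AutoProcessor.register(MyConfig, MyProcessor)
After registration, AutoProcessor.from_pretrained() can resolve checkpoints whose config declares model_type "my-model".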
Register a new processor for this class. Generic model classes The following auto classes are available for instantiating a base model class without a specific head. AutoModel class transformers.AutoModel < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the base model classes of the library when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: ASTConfig configuration class: ASTModel (Audio Spectrogram Transformer model) AlbertConfig configuration class: AlbertModel (ALBERT model) AlignConfig configuration class: AlignModel (ALIGN model) AltCLIPConfig configuration class: AltCLIPModel (AltCLIP model) AutoformerConfig configuration class: AutoformerModel (Autoformer model) BarkConfig configuration class: BarkModel (Bark model) BartConfig configuration class: BartModel (BART model) BeitConfig configuration class: BeitModel (BEiT model) BertConfig configuration class: BertModel (BERT model) BertGenerationConfig configuration class: BertGenerationEncoder (Bert Generation model) BigBirdConfig configuration class: BigBirdModel (BigBird model) BigBirdPegasusConfig configuration class: BigBirdPegasusModel (BigBird-Pegasus model) BioGptConfig configuration class: BioGptModel (BioGpt model) BitConfig configuration class: BitModel (BiT model) BlenderbotConfig configuration class: BlenderbotModel (Blenderbot model) BlenderbotSmallConfig configuration class: BlenderbotSmallModel (BlenderbotSmall model) Blip2Config configuration class: Blip2Model (BLIP-2 model) BlipConfig configuration class: BlipModel (BLIP model) BloomConfig configuration class: BloomModel (BLOOM model) BridgeTowerConfig configuration class: BridgeTowerModel (BridgeTower model) BrosConfig configuration class: BrosModel (BROS model) CLIPConfig configuration class: CLIPModel (CLIP model) CLIPSegConfig configuration class: CLIPSegModel (CLIPSeg model) CTRLConfig configuration class: CTRLModel (CTRL model) CamembertConfig configuration class: CamembertModel (CamemBERT model) CanineConfig configuration class: CanineModel (CANINE model) ChineseCLIPConfig configuration class: ChineseCLIPModel (Chinese-CLIP model) ClapConfig configuration class: ClapModel (CLAP model) CodeGenConfig configuration class: CodeGenModel (CodeGen model) ConditionalDetrConfig configuration class: ConditionalDetrModel (Conditional DETR model) ConvBertConfig configuration class: ConvBertModel (ConvBERT model) ConvNextConfig configuration class: ConvNextModel (ConvNeXT model) ConvNextV2Config configuration class: ConvNextV2Model (ConvNeXTV2 model) CpmAntConfig configuration class: CpmAntModel (CPM-Ant model) CvtConfig configuration class: CvtModel (CvT model) DPRConfig configuration class: DPRQuestionEncoder (DPR model) DPTConfig configuration class: DPTModel (DPT model) Data2VecAudioConfig configuration class: Data2VecAudioModel (Data2VecAudio model) Data2VecTextConfig configuration class: Data2VecTextModel (Data2VecText model) Data2VecVisionConfig configuration class: Data2VecVisionModel (Data2VecVision model) DebertaConfig configuration class: DebertaModel (DeBERTa model) DebertaV2Config configuration class: DebertaV2Model (DeBERTa-v2 model) DecisionTransformerConfig configuration class: DecisionTransformerModel (Decision Transformer model) DeformableDetrConfig 
configuration class: DeformableDetrModel (Deformable DETR model) DeiTConfig configuration class: DeiTModel (DeiT model) DetaConfig configuration class: DetaModel (DETA model) DetrConfig configuration class: DetrModel (DETR model) DinatConfig configuration class: DinatModel (DiNAT model) Dinov2Config configuration class: Dinov2Model (DINOv2 model) DistilBertConfig configuration class: DistilBertModel (DistilBERT model) DonutSwinConfig configuration class: DonutSwinModel (DonutSwin model) EfficientFormerConfig configuration class: EfficientFormerModel (EfficientFormer model) EfficientNetConfig configuration class: EfficientNetModel (EfficientNet model) ElectraConfig configuration class: ElectraModel (ELECTRA model) EncodecConfig configuration class: EncodecModel (EnCodec model) ErnieConfig configuration class: ErnieModel (ERNIE model) ErnieMConfig configuration class: ErnieMModel (ErnieM model) EsmConfig configuration class: EsmModel (ESM model) FNetConfig configuration class: FNetModel (FNet model) FSMTConfig configuration class: FSMTModel (FairSeq Machine-Translation model) FalconConfig configuration class: FalconModel (Falcon model) FlaubertConfig configuration class: FlaubertModel (FlauBERT model) FlavaConfig configuration class: FlavaModel (FLAVA model) FocalNetConfig configuration class: FocalNetModel (FocalNet model) FunnelConfig configuration class: FunnelModel or FunnelBaseModel (Funnel Transformer model) GLPNConfig configuration class: GLPNModel (GLPN model) GPT2Config configuration class: GPT2Model (OpenAI GPT-2 model) GPTBigCodeConfig configuration class: GPTBigCodeModel (GPTBigCode model) GPTJConfig configuration class: GPTJModel (GPT-J model) GPTNeoConfig configuration class: GPTNeoModel (GPT Neo model) GPTNeoXConfig configuration class: GPTNeoXModel (GPT NeoX model) GPTNeoXJapaneseConfig configuration class: GPTNeoXJapaneseModel (GPT NeoX Japanese model) GPTSanJapaneseConfig configuration class: GPTSanJapaneseForConditionalGeneration (GPTSAN-japanese model) GitConfig configuration class: GitModel (GIT model) GraphormerConfig configuration class: GraphormerModel (Graphormer model) GroupViTConfig configuration class: GroupViTModel (GroupViT model) HubertConfig configuration class: HubertModel (Hubert model) IBertConfig configuration class: IBertModel (I-BERT model) IdeficsConfig configuration class: IdeficsModel (IDEFICS model) ImageGPTConfig configuration class: ImageGPTModel (ImageGPT model) InformerConfig configuration class: InformerModel (Informer model) JukeboxConfig configuration class: JukeboxModel (Jukebox model) LEDConfig configuration class: LEDModel (LED model) LayoutLMConfig configuration class: LayoutLMModel (LayoutLM model) LayoutLMv2Config configuration class: LayoutLMv2Model (LayoutLMv2 model) LayoutLMv3Config configuration class: LayoutLMv3Model (LayoutLMv3 model) LevitConfig configuration class: LevitModel (LeViT model) LiltConfig configuration class: LiltModel (LiLT model) LlamaConfig configuration class: LlamaModel (LLaMA model) LongT5Config configuration class: LongT5Model (LongT5 model) LongformerConfig configuration class: LongformerModel (Longformer model) LukeConfig configuration class: LukeModel (LUKE model) LxmertConfig configuration class: LxmertModel (LXMERT model) M2M100Config configuration class: M2M100Model (M2M100 model) MBartConfig configuration class: MBartModel (mBART model) MCTCTConfig configuration class: MCTCTModel (M-CTC-T model) MPNetConfig configuration class: MPNetModel (MPNet model) MT5Config configuration class: MT5Model (MT5 model) 
MarianConfig configuration class: MarianModel (Marian model) MarkupLMConfig configuration class: MarkupLMModel (MarkupLM model) Mask2FormerConfig configuration class: Mask2FormerModel (Mask2Former model) MaskFormerConfig configuration class: MaskFormerModel (MaskFormer model) MaskFormerSwinConfig configuration class: MaskFormerSwinModel (MaskFormerSwin model) MegaConfig configuration class: MegaModel (MEGA model) MegatronBertConfig configuration class: MegatronBertModel (Megatron-BERT model) MgpstrConfig configuration class: MgpstrForSceneTextRecognition (MGP-STR model) MistralConfig configuration class: MistralModel (Mistral model) MobileBertConfig configuration class: MobileBertModel (MobileBERT model) MobileNetV1Config configuration class: MobileNetV1Model (MobileNetV1 model) MobileNetV2Config configuration class: MobileNetV2Model (MobileNetV2 model) MobileViTConfig configuration class: MobileViTModel (MobileViT model) MobileViTV2Config configuration class: MobileViTV2Model (MobileViTV2 model) MptConfig configuration class: MptModel (MPT model) MraConfig configuration class: MraModel (MRA model) MvpConfig configuration class: MvpModel (MVP model) NatConfig configuration class: NatModel (NAT model) NezhaConfig configuration class: NezhaModel (Nezha model) NllbMoeConfig configuration class: NllbMoeModel (NLLB-MOE model) NystromformerConfig configuration class: NystromformerModel (Nyströmformer model) OPTConfig configuration class: OPTModel (OPT model) OneFormerConfig configuration class: OneFormerModel (OneFormer model) OpenAIGPTConfig configuration class: OpenAIGPTModel (OpenAI GPT model) OpenLlamaConfig configuration class: OpenLlamaModel (OpenLlama model) OwlViTConfig configuration class: OwlViTModel (OWL-ViT model) PLBartConfig configuration class: PLBartModel (PLBart model) PegasusConfig configuration class: PegasusModel (Pegasus model) PegasusXConfig configuration class: PegasusXModel (PEGASUS-X model) PerceiverConfig configuration class: PerceiverModel (Perceiver model) PersimmonConfig configuration class: PersimmonModel (Persimmon model) PoolFormerConfig configuration class: PoolFormerModel (PoolFormer model) ProphetNetConfig configuration class: ProphetNetModel (ProphetNet model) PvtConfig configuration class: PvtModel (PVT model) QDQBertConfig configuration class: QDQBertModel (QDQBert model) ReformerConfig configuration class: ReformerModel (Reformer model) RegNetConfig configuration class: RegNetModel (RegNet model) RemBertConfig configuration class: RemBertModel (RemBERT model) ResNetConfig configuration class: ResNetModel (ResNet model) RetriBertConfig configuration class: RetriBertModel (RetriBERT model) RoCBertConfig configuration class: RoCBertModel (RoCBert model) RoFormerConfig configuration class: RoFormerModel (RoFormer model) RobertaConfig configuration class: RobertaModel (RoBERTa model) RobertaPreLayerNormConfig configuration class: RobertaPreLayerNormModel (RoBERTa-PreLayerNorm model) RwkvConfig configuration class: RwkvModel (RWKV model) SEWConfig configuration class: SEWModel (SEW model) SEWDConfig configuration class: SEWDModel (SEW-D model) SamConfig configuration class: SamModel (SAM model) SegformerConfig configuration class: SegformerModel (SegFormer model) Speech2TextConfig configuration class: Speech2TextModel (Speech2Text model) SpeechT5Config configuration class: SpeechT5Model (SpeechT5 model) SplinterConfig configuration class: SplinterModel (Splinter model) SqueezeBertConfig configuration class: SqueezeBertModel (SqueezeBERT model) SwiftFormerConfig 
configuration class: SwiftFormerModel (SwiftFormer model) Swin2SRConfig configuration class: Swin2SRModel (Swin2SR model) SwinConfig configuration class: SwinModel (Swin Transformer model) Swinv2Config configuration class: Swinv2Model (Swin Transformer V2 model) SwitchTransformersConfig configuration class: SwitchTransformersModel (SwitchTransformers model) T5Config configuration class: T5Model (T5 model) TableTransformerConfig configuration class: TableTransformerModel (Table Transformer model) TapasConfig configuration class: TapasModel (TAPAS model) TimeSeriesTransformerConfig configuration class: TimeSeriesTransformerModel (Time Series Transformer model) TimesformerConfig configuration class: TimesformerModel (TimeSformer model) TimmBackboneConfig configuration class: TimmBackbone (TimmBackbone model) TrajectoryTransformerConfig configuration class: TrajectoryTransformerModel (Trajectory Transformer model) TransfoXLConfig configuration class: TransfoXLModel (Transformer-XL model) TvltConfig configuration class: TvltModel (TVLT model) UMT5Config configuration class: UMT5Model (UMT5 model) UniSpeechConfig configuration class: UniSpeechModel (UniSpeech model) UniSpeechSatConfig configuration class: UniSpeechSatModel (UniSpeechSat model) VanConfig configuration class: VanModel (VAN model) ViTConfig configuration class: ViTModel (ViT model) ViTHybridConfig configuration class: ViTHybridModel (ViT Hybrid model) ViTMAEConfig configuration class: ViTMAEModel (ViTMAE model) ViTMSNConfig configuration class: ViTMSNModel (ViTMSN model) VideoMAEConfig configuration class: VideoMAEModel (VideoMAE model) ViltConfig configuration class: ViltModel (ViLT model) VisionTextDualEncoderConfig configuration class: VisionTextDualEncoderModel (VisionTextDualEncoder model) VisualBertConfig configuration class: VisualBertModel (VisualBERT model) VitDetConfig configuration class: VitDetModel (VitDet model) VitsConfig configuration class: VitsModel (VITS model) VivitConfig configuration class: VivitModel (ViViT model) Wav2Vec2Config configuration class: Wav2Vec2Model (Wav2Vec2 model) Wav2Vec2ConformerConfig configuration class: Wav2Vec2ConformerModel (Wav2Vec2-Conformer model) WavLMConfig configuration class: WavLMModel (WavLM model) WhisperConfig configuration class: WhisperModel (Whisper model) XCLIPConfig configuration class: XCLIPModel (X-CLIP model) XGLMConfig configuration class: XGLMModel (XGLM model) XLMConfig configuration class: XLMModel (XLM model) XLMProphetNetConfig configuration class: XLMProphetNetModel (XLM-ProphetNet model) XLMRobertaConfig configuration class: XLMRobertaModel (XLM-RoBERTa model) XLMRobertaXLConfig configuration class: XLMRobertaXLModel (XLM-RoBERTa-XL model) XLNetConfig configuration class: XLNetModel (XLNet model) XmodConfig configuration class: XmodModel (X-MOD model) YolosConfig configuration class: YolosModel (YOLOS model) YosoConfig configuration class: YosoModel (YOSO model) Instantiates one of the base model classes of the library from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. 
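Because only the configuration is consumed here, from_config() is also a convenient way to build a randomly initialized variant with modified hyperparameters; a minimal sketch under that assumption (the overridden attribute is specific to BERT-style configs and is shown purely for illustration; the basic pattern appears in the Examples that follow):
>>> from transformers import AutoConfig, AutoModel
>>> # Load a pretrained configuration and override one attribute at load time.
>>> config = AutoConfig.from_pretrained("bert-base-cased", hidden_dropout_prob=0.2)
>>> # Builds the architecture described by `config`; the weights are randomly initialized,
>>> # and no pretrained checkpoint is downloaded for the model itself.
>>> model = AutoModel.from_config(config)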
Examples:
>>> from transformers import AutoConfig, AutoModel
>>> # Download the configuration from huggingface.co (and cache it), then build the model from that config.
>>> config = AutoConfig.from_pretrained("bert-base-cased")
>>> model = AutoModel.from_config(config)
from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the base model classes of the library from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: albert — AlbertModel (ALBERT model) align — AlignModel (ALIGN model) altclip — AltCLIPModel (AltCLIP model) audio-spectrogram-transformer — ASTModel (Audio Spectrogram Transformer model) autoformer — AutoformerModel (Autoformer model) bark — BarkModel (Bark model) bart — BartModel (BART model) beit — BeitModel (BEiT model) bert — BertModel (BERT model) bert-generation — BertGenerationEncoder (Bert Generation model) big_bird — BigBirdModel (BigBird model) bigbird_pegasus — BigBirdPegasusModel (BigBird-Pegasus model) biogpt — BioGptModel (BioGpt model) bit — BitModel (BiT model) blenderbot — BlenderbotModel (Blenderbot model) blenderbot-small — BlenderbotSmallModel (BlenderbotSmall model) blip — BlipModel (BLIP model) blip-2 — Blip2Model (BLIP-2 model) bloom — BloomModel (BLOOM model) bridgetower — BridgeTowerModel (BridgeTower model) bros — BrosModel (BROS model) camembert — CamembertModel (CamemBERT model) canine — CanineModel (CANINE model) chinese_clip — ChineseCLIPModel (Chinese-CLIP model) clap — ClapModel (CLAP model) clip — CLIPModel (CLIP model) clipseg — CLIPSegModel (CLIPSeg model) code_llama — LlamaModel (CodeLlama model) codegen — CodeGenModel (CodeGen model) conditional_detr — ConditionalDetrModel (Conditional DETR model) convbert — ConvBertModel (ConvBERT model) convnext — ConvNextModel (ConvNeXT model) convnextv2 — ConvNextV2Model (ConvNeXTV2 model) cpmant — CpmAntModel (CPM-Ant model) ctrl — CTRLModel (CTRL model) cvt — CvtModel (CvT model) data2vec-audio — Data2VecAudioModel (Data2VecAudio model) data2vec-text — Data2VecTextModel (Data2VecText model) data2vec-vision — Data2VecVisionModel 
(Data2VecVision model) deberta — DebertaModel (DeBERTa model) deberta-v2 — DebertaV2Model (DeBERTa-v2 model) decision_transformer — DecisionTransformerModel (Decision Transformer model) deformable_detr — DeformableDetrModel (Deformable DETR model) deit — DeiTModel (DeiT model) deta — DetaModel (DETA model) detr — DetrModel (DETR model) dinat — DinatModel (DiNAT model) dinov2 — Dinov2Model (DINOv2 model) distilbert — DistilBertModel (DistilBERT model) donut-swin — DonutSwinModel (DonutSwin model) dpr — DPRQuestionEncoder (DPR model) dpt — DPTModel (DPT model) efficientformer — EfficientFormerModel (EfficientFormer model) efficientnet — EfficientNetModel (EfficientNet model) electra — ElectraModel (ELECTRA model) encodec — EncodecModel (EnCodec model) ernie — ErnieModel (ERNIE model) ernie_m — ErnieMModel (ErnieM model) esm — EsmModel (ESM model) falcon — FalconModel (Falcon model) flaubert — FlaubertModel (FlauBERT model) flava — FlavaModel (FLAVA model) fnet — FNetModel (FNet model) focalnet — FocalNetModel (FocalNet model) fsmt — FSMTModel (FairSeq Machine-Translation model) funnel — FunnelModel or FunnelBaseModel (Funnel Transformer model) git — GitModel (GIT model) glpn — GLPNModel (GLPN model) gpt-sw3 — GPT2Model (GPT-Sw3 model) gpt2 — GPT2Model (OpenAI GPT-2 model) gpt_bigcode — GPTBigCodeModel (GPTBigCode model) gpt_neo — GPTNeoModel (GPT Neo model) gpt_neox — GPTNeoXModel (GPT NeoX model) gpt_neox_japanese — GPTNeoXJapaneseModel (GPT NeoX Japanese model) gptj — GPTJModel (GPT-J model) gptsan-japanese — GPTSanJapaneseForConditionalGeneration (GPTSAN-japanese model) graphormer — GraphormerModel (Graphormer model) groupvit — GroupViTModel (GroupViT model) hubert — HubertModel (Hubert model) ibert — IBertModel (I-BERT model) idefics — IdeficsModel (IDEFICS model) imagegpt — ImageGPTModel (ImageGPT model) informer — InformerModel (Informer model) jukebox — JukeboxModel (Jukebox model) layoutlm — LayoutLMModel (LayoutLM model) layoutlmv2 — LayoutLMv2Model (LayoutLMv2 model) layoutlmv3 — LayoutLMv3Model (LayoutLMv3 model) led — LEDModel (LED model) levit — LevitModel (LeViT model) lilt — LiltModel (LiLT model) llama — LlamaModel (LLaMA model) longformer — LongformerModel (Longformer model) longt5 — LongT5Model (LongT5 model) luke — LukeModel (LUKE model) lxmert — LxmertModel (LXMERT model) m2m_100 — M2M100Model (M2M100 model) marian — MarianModel (Marian model) markuplm — MarkupLMModel (MarkupLM model) mask2former — Mask2FormerModel (Mask2Former model) maskformer — MaskFormerModel (MaskFormer model) maskformer-swin — MaskFormerSwinModel (MaskFormerSwin model) mbart — MBartModel (mBART model) mctct — MCTCTModel (M-CTC-T model) mega — MegaModel (MEGA model) megatron-bert — MegatronBertModel (Megatron-BERT model) mgp-str — MgpstrForSceneTextRecognition (MGP-STR model) mistral — MistralModel (Mistral model) mobilebert — MobileBertModel (MobileBERT model) mobilenet_v1 — MobileNetV1Model (MobileNetV1 model) mobilenet_v2 — MobileNetV2Model (MobileNetV2 model) mobilevit — MobileViTModel (MobileViT model) mobilevitv2 — MobileViTV2Model (MobileViTV2 model) mpnet — MPNetModel (MPNet model) mpt — MptModel (MPT model) mra — MraModel (MRA model) mt5 — MT5Model (MT5 model) mvp — MvpModel (MVP model) nat — NatModel (NAT model) nezha — NezhaModel (Nezha model) nllb-moe — NllbMoeModel (NLLB-MOE model) nystromformer — NystromformerModel (Nyströmformer model) oneformer — OneFormerModel (OneFormer model) open-llama — OpenLlamaModel (OpenLlama model) openai-gpt — OpenAIGPTModel (OpenAI GPT model) opt — OPTModel 
(OPT model) owlvit — OwlViTModel (OWL-ViT model) pegasus — PegasusModel (Pegasus model) pegasus_x — PegasusXModel (PEGASUS-X model) perceiver — PerceiverModel (Perceiver model) persimmon — PersimmonModel (Persimmon model) plbart — PLBartModel (PLBart model) poolformer — PoolFormerModel (PoolFormer model) prophetnet — ProphetNetModel (ProphetNet model) pvt — PvtModel (PVT model) qdqbert — QDQBertModel (QDQBert model) reformer — ReformerModel (Reformer model) regnet — RegNetModel (RegNet model) rembert — RemBertModel (RemBERT model) resnet — ResNetModel (ResNet model) retribert — RetriBertModel (RetriBERT model) roberta — RobertaModel (RoBERTa model) roberta-prelayernorm — RobertaPreLayerNormModel (RoBERTa-PreLayerNorm model) roc_bert — RoCBertModel (RoCBert model) roformer — RoFormerModel (RoFormer model) rwkv — RwkvModel (RWKV model) sam — SamModel (SAM model) segformer — SegformerModel (SegFormer model) sew — SEWModel (SEW model) sew-d — SEWDModel (SEW-D model) speech_to_text — Speech2TextModel (Speech2Text model) speecht5 — SpeechT5Model (SpeechT5 model) splinter — SplinterModel (Splinter model) squeezebert — SqueezeBertModel (SqueezeBERT model) swiftformer — SwiftFormerModel (SwiftFormer model) swin — SwinModel (Swin Transformer model) swin2sr — Swin2SRModel (Swin2SR model) swinv2 — Swinv2Model (Swin Transformer V2 model) switch_transformers — SwitchTransformersModel (SwitchTransformers model) t5 — T5Model (T5 model) table-transformer — TableTransformerModel (Table Transformer model) tapas — TapasModel (TAPAS model) time_series_transformer — TimeSeriesTransformerModel (Time Series Transformer model) timesformer — TimesformerModel (TimeSformer model) timm_backbone — TimmBackbone (TimmBackbone model) trajectory_transformer — TrajectoryTransformerModel (Trajectory Transformer model) transfo-xl — TransfoXLModel (Transformer-XL model) tvlt — TvltModel (TVLT model) umt5 — UMT5Model (UMT5 model) unispeech — UniSpeechModel (UniSpeech model) unispeech-sat — UniSpeechSatModel (UniSpeechSat model) van — VanModel (VAN model) videomae — VideoMAEModel (VideoMAE model) vilt — ViltModel (ViLT model) vision-text-dual-encoder — VisionTextDualEncoderModel (VisionTextDualEncoder model) visual_bert — VisualBertModel (VisualBERT model) vit — ViTModel (ViT model) vit_hybrid — ViTHybridModel (ViT Hybrid model) vit_mae — ViTMAEModel (ViTMAE model) vit_msn — ViTMSNModel (ViTMSN model) vitdet — VitDetModel (VitDet model) vits — VitsModel (VITS model) vivit — VivitModel (ViViT model) wav2vec2 — Wav2Vec2Model (Wav2Vec2 model) wav2vec2-conformer — Wav2Vec2ConformerModel (Wav2Vec2-Conformer model) wavlm — WavLMModel (WavLM model) whisper — WhisperModel (Whisper model) xclip — XCLIPModel (X-CLIP model) xglm — XGLMModel (XGLM model) xlm — XLMModel (XLM model) xlm-prophetnet — XLMProphetNetModel (XLM-ProphetNet model) xlm-roberta — XLMRobertaModel (XLM-RoBERTa model) xlm-roberta-xl — XLMRobertaXLModel (XLM-RoBERTa-XL model) xlnet — XLNetModel (XLNet model) xmod — XmodModel (X-MOD model) yolos — YolosModel (YOLOS model) yoso — YosoModel (YOSO model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). 
To train the model, you should first set it back in training mode with model.train() Examples: >>> from transformers import AutoConfig, AutoModel >>> >>> model = AutoModel.from_pretrained("bert-base-cased") >>> >>> model = AutoModel.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json") >>> model = AutoModel.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config ... ) TFAutoModel class transformers.TFAutoModel < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the base model classes of the library when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: TFAlbertModel (ALBERT model) BartConfig configuration class: TFBartModel (BART model) BertConfig configuration class: TFBertModel (BERT model) BlenderbotConfig configuration class: TFBlenderbotModel (Blenderbot model) BlenderbotSmallConfig configuration class: TFBlenderbotSmallModel (BlenderbotSmall model) BlipConfig configuration class: TFBlipModel (BLIP model) CLIPConfig configuration class: TFCLIPModel (CLIP model) CTRLConfig configuration class: TFCTRLModel (CTRL model) CamembertConfig configuration class: TFCamembertModel (CamemBERT model) ConvBertConfig configuration class: TFConvBertModel (ConvBERT model) ConvNextConfig configuration class: TFConvNextModel (ConvNeXT model) CvtConfig configuration class: TFCvtModel (CvT model) DPRConfig configuration class: TFDPRQuestionEncoder (DPR model) Data2VecVisionConfig configuration class: TFData2VecVisionModel (Data2VecVision model) DebertaConfig configuration class: TFDebertaModel (DeBERTa model) DebertaV2Config configuration class: TFDebertaV2Model (DeBERTa-v2 model) DeiTConfig configuration class: TFDeiTModel (DeiT model) DistilBertConfig configuration class: TFDistilBertModel (DistilBERT model) EfficientFormerConfig configuration class: TFEfficientFormerModel (EfficientFormer model) ElectraConfig configuration class: TFElectraModel (ELECTRA model) EsmConfig configuration class: TFEsmModel (ESM model) FlaubertConfig configuration class: TFFlaubertModel (FlauBERT model) FunnelConfig configuration class: TFFunnelModel or TFFunnelBaseModel (Funnel Transformer model) GPT2Config configuration class: TFGPT2Model (OpenAI GPT-2 model) GPTJConfig configuration class: TFGPTJModel (GPT-J model) GroupViTConfig configuration class: TFGroupViTModel (GroupViT model) HubertConfig configuration class: TFHubertModel (Hubert model) LEDConfig configuration class: TFLEDModel (LED model) LayoutLMConfig configuration class: TFLayoutLMModel (LayoutLM model) LayoutLMv3Config configuration class: TFLayoutLMv3Model (LayoutLMv3 model) LongformerConfig configuration class: TFLongformerModel (Longformer model) LxmertConfig configuration class: TFLxmertModel (LXMERT model) MBartConfig configuration class: TFMBartModel (mBART model) MPNetConfig configuration class: TFMPNetModel (MPNet model) MT5Config configuration class: TFMT5Model (MT5 model) MarianConfig configuration class: TFMarianModel (Marian model) MobileBertConfig configuration class: TFMobileBertModel (MobileBERT model) MobileViTConfig configuration class: 
TFMobileViTModel (MobileViT model) OPTConfig configuration class: TFOPTModel (OPT model) OpenAIGPTConfig configuration class: TFOpenAIGPTModel (OpenAI GPT model) PegasusConfig configuration class: TFPegasusModel (Pegasus model) RegNetConfig configuration class: TFRegNetModel (RegNet model) RemBertConfig configuration class: TFRemBertModel (RemBERT model) ResNetConfig configuration class: TFResNetModel (ResNet model) RoFormerConfig configuration class: TFRoFormerModel (RoFormer model) RobertaConfig configuration class: TFRobertaModel (RoBERTa model) RobertaPreLayerNormConfig configuration class: TFRobertaPreLayerNormModel (RoBERTa-PreLayerNorm model) SamConfig configuration class: TFSamModel (SAM model) SegformerConfig configuration class: TFSegformerModel (SegFormer model) Speech2TextConfig configuration class: TFSpeech2TextModel (Speech2Text model) SwinConfig configuration class: TFSwinModel (Swin Transformer model) T5Config configuration class: TFT5Model (T5 model) TapasConfig configuration class: TFTapasModel (TAPAS model) TransfoXLConfig configuration class: TFTransfoXLModel (Transformer-XL model) ViTConfig configuration class: TFViTModel (ViT model) ViTMAEConfig configuration class: TFViTMAEModel (ViTMAE model) VisionTextDualEncoderConfig configuration class: TFVisionTextDualEncoderModel (VisionTextDualEncoder model) Wav2Vec2Config configuration class: TFWav2Vec2Model (Wav2Vec2 model) WhisperConfig configuration class: TFWhisperModel (Whisper model) XGLMConfig configuration class: TFXGLMModel (XGLM model) XLMConfig configuration class: TFXLMModel (XLM model) XLMRobertaConfig configuration class: TFXLMRobertaModel (XLM-RoBERTa model) XLNetConfig configuration class: TFXLNetModel (XLNet model) Instantiates one of the base model classes of the library from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: >>> from transformers import AutoConfig, TFAutoModel >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = TFAutoModel.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. 
The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (i.e., do not try to download the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the base model classes of the library from a pretrained model.
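For example, a checkpoint saved locally with save_pretrained() can be reloaded by passing the directory path; a minimal sketch (the directory name is illustrative, and TensorFlow weights for bert-base-cased are assumed to be available on the Hub):

>>> from transformers import TFAutoModel

>>> # Download the pretrained model, save it locally, then reload it from disk.
>>> model = TFAutoModel.from_pretrained("bert-base-cased")
>>> model.save_pretrained("./my_model_directory/")
>>> reloaded = TFAutoModel.from_pretrained("./my_model_directory/")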
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: albert — TFAlbertModel (ALBERT model) bart — TFBartModel (BART model) bert — TFBertModel (BERT model) blenderbot — TFBlenderbotModel (Blenderbot model) blenderbot-small — TFBlenderbotSmallModel (BlenderbotSmall model) blip — TFBlipModel (BLIP model) camembert — TFCamembertModel (CamemBERT model) clip — TFCLIPModel (CLIP model) convbert — TFConvBertModel (ConvBERT model) convnext — TFConvNextModel (ConvNeXT model) ctrl — TFCTRLModel (CTRL model) cvt — TFCvtModel (CvT model) data2vec-vision — TFData2VecVisionModel (Data2VecVision model) deberta — TFDebertaModel (DeBERTa model) deberta-v2 — TFDebertaV2Model (DeBERTa-v2 model) deit — TFDeiTModel (DeiT model) distilbert — TFDistilBertModel (DistilBERT model) dpr — TFDPRQuestionEncoder (DPR model) efficientformer — TFEfficientFormerModel (EfficientFormer model) electra — TFElectraModel (ELECTRA model) esm — TFEsmModel (ESM model) flaubert — TFFlaubertModel (FlauBERT model) funnel — TFFunnelModel or TFFunnelBaseModel (Funnel Transformer model) gpt-sw3 — TFGPT2Model (GPT-Sw3 model) gpt2 — TFGPT2Model (OpenAI GPT-2 model) gptj — TFGPTJModel (GPT-J model) groupvit — TFGroupViTModel (GroupViT model) hubert — TFHubertModel (Hubert model) layoutlm — TFLayoutLMModel (LayoutLM model) layoutlmv3 — TFLayoutLMv3Model (LayoutLMv3 model) led — TFLEDModel (LED model) longformer — TFLongformerModel (Longformer model) lxmert — TFLxmertModel (LXMERT model) marian — TFMarianModel (Marian model) mbart — TFMBartModel (mBART model) mobilebert — TFMobileBertModel (MobileBERT model) mobilevit — TFMobileViTModel (MobileViT model) mpnet — TFMPNetModel (MPNet model) mt5 — TFMT5Model (MT5 model) openai-gpt — TFOpenAIGPTModel (OpenAI GPT model) opt — TFOPTModel (OPT model) pegasus — TFPegasusModel (Pegasus model) regnet — TFRegNetModel (RegNet model) rembert — TFRemBertModel (RemBERT model) resnet — TFResNetModel (ResNet model) roberta — TFRobertaModel (RoBERTa model) roberta-prelayernorm — TFRobertaPreLayerNormModel (RoBERTa-PreLayerNorm model) roformer — TFRoFormerModel (RoFormer model) sam — TFSamModel (SAM model) segformer — TFSegformerModel (SegFormer model) speech_to_text — TFSpeech2TextModel (Speech2Text model) swin — TFSwinModel (Swin Transformer model) t5 — TFT5Model (T5 model) tapas — TFTapasModel (TAPAS model) transfo-xl — TFTransfoXLModel (Transformer-XL model) vision-text-dual-encoder — TFVisionTextDualEncoderModel (VisionTextDualEncoder model) vit — TFViTModel (ViT model) vit_mae — TFViTMAEModel (ViTMAE model) wav2vec2 — TFWav2Vec2Model (Wav2Vec2 model) whisper — TFWhisperModel (Whisper model) xglm — TFXGLMModel (XGLM model) xlm — TFXLMModel (XLM model) xlm-roberta — TFXLMRobertaModel (XLM-RoBERTa model) xlnet — TFXLNetModel (XLNet model) Examples: >>> from transformers import AutoConfig, TFAutoModel >>> >>> model = TFAutoModel.from_pretrained("bert-base-cased") >>> >>> model = TFAutoModel.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json") >>> model = TFAutoModel.from_pretrained( ... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config ... 
) FlaxAutoModel class transformers.FlaxAutoModel < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the base model classes of the library when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: FlaxAlbertModel (ALBERT model) BartConfig configuration class: FlaxBartModel (BART model) BeitConfig configuration class: FlaxBeitModel (BEiT model) BertConfig configuration class: FlaxBertModel (BERT model) BigBirdConfig configuration class: FlaxBigBirdModel (BigBird model) BlenderbotConfig configuration class: FlaxBlenderbotModel (Blenderbot model) BlenderbotSmallConfig configuration class: FlaxBlenderbotSmallModel (BlenderbotSmall model) BloomConfig configuration class: FlaxBloomModel (BLOOM model) CLIPConfig configuration class: FlaxCLIPModel (CLIP model) DistilBertConfig configuration class: FlaxDistilBertModel (DistilBERT model) ElectraConfig configuration class: FlaxElectraModel (ELECTRA model) GPT2Config configuration class: FlaxGPT2Model (OpenAI GPT-2 model) GPTJConfig configuration class: FlaxGPTJModel (GPT-J model) GPTNeoConfig configuration class: FlaxGPTNeoModel (GPT Neo model) LongT5Config configuration class: FlaxLongT5Model (LongT5 model) MBartConfig configuration class: FlaxMBartModel (mBART model) MT5Config configuration class: FlaxMT5Model (MT5 model) MarianConfig configuration class: FlaxMarianModel (Marian model) OPTConfig configuration class: FlaxOPTModel (OPT model) PegasusConfig configuration class: FlaxPegasusModel (Pegasus model) RegNetConfig configuration class: FlaxRegNetModel (RegNet model) ResNetConfig configuration class: FlaxResNetModel (ResNet model) RoFormerConfig configuration class: FlaxRoFormerModel (RoFormer model) RobertaConfig configuration class: FlaxRobertaModel (RoBERTa model) RobertaPreLayerNormConfig configuration class: FlaxRobertaPreLayerNormModel (RoBERTa-PreLayerNorm model) T5Config configuration class: FlaxT5Model (T5 model) ViTConfig configuration class: FlaxViTModel (ViT model) VisionTextDualEncoderConfig configuration class: FlaxVisionTextDualEncoderModel (VisionTextDualEncoder model) Wav2Vec2Config configuration class: FlaxWav2Vec2Model (Wav2Vec2 model) WhisperConfig configuration class: FlaxWhisperModel (Whisper model) XGLMConfig configuration class: FlaxXGLMModel (XGLM model) XLMRobertaConfig configuration class: FlaxXLMRobertaModel (XLM-RoBERTa model) Instantiates one of the base model classes of the library from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: >>> from transformers import AutoConfig, FlaxAutoModel >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = FlaxAutoModel.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. 
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). 
Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the base model classes of the library from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: albert — FlaxAlbertModel (ALBERT model) bart — FlaxBartModel (BART model) beit — FlaxBeitModel (BEiT model) bert — FlaxBertModel (BERT model) big_bird — FlaxBigBirdModel (BigBird model) blenderbot — FlaxBlenderbotModel (Blenderbot model) blenderbot-small — FlaxBlenderbotSmallModel (BlenderbotSmall model) bloom — FlaxBloomModel (BLOOM model) clip — FlaxCLIPModel (CLIP model) distilbert — FlaxDistilBertModel (DistilBERT model) electra — FlaxElectraModel (ELECTRA model) gpt-sw3 — FlaxGPT2Model (GPT-Sw3 model) gpt2 — FlaxGPT2Model (OpenAI GPT-2 model) gpt_neo — FlaxGPTNeoModel (GPT Neo model) gptj — FlaxGPTJModel (GPT-J model) longt5 — FlaxLongT5Model (LongT5 model) marian — FlaxMarianModel (Marian model) mbart — FlaxMBartModel (mBART model) mt5 — FlaxMT5Model (MT5 model) opt — FlaxOPTModel (OPT model) pegasus — FlaxPegasusModel (Pegasus model) regnet — FlaxRegNetModel (RegNet model) resnet — FlaxResNetModel (ResNet model) roberta — FlaxRobertaModel (RoBERTa model) roberta-prelayernorm — FlaxRobertaPreLayerNormModel (RoBERTa-PreLayerNorm model) roformer — FlaxRoFormerModel (RoFormer model) t5 — FlaxT5Model (T5 model) vision-text-dual-encoder — FlaxVisionTextDualEncoderModel (VisionTextDualEncoder model) vit — FlaxViTModel (ViT model) wav2vec2 — FlaxWav2Vec2Model (Wav2Vec2 model) whisper — FlaxWhisperModel (Whisper model) xglm — FlaxXGLMModel (XGLM model) xlm-roberta — FlaxXLMRobertaModel (XLM-RoBERTa model) Examples: >>> from transformers import AutoConfig, FlaxAutoModel >>> >>> model = FlaxAutoModel.from_pretrained("bert-base-cased") >>> >>> model = FlaxAutoModel.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json") >>> model = FlaxAutoModel.from_pretrained( ... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config ... ) Generic pretraining classes The following auto classes are available for instantiating a model with a pretraining head. AutoModelForPreTraining class transformers.AutoModelForPreTraining < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a pretraining head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). 
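As a quick illustration of this dispatch behaviour, a minimal sketch (assuming network access to the public bert-base-cased checkpoint):

>>> from transformers import AutoModelForPreTraining

>>> # The auto class returns the concrete class registered for the model_type of the checkpoint.
>>> model = AutoModelForPreTraining.from_pretrained("bert-base-cased")
>>> type(model).__name__
'BertForPreTraining'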
from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: AlbertForPreTraining (ALBERT model) BartConfig configuration class: BartForConditionalGeneration (BART model) BertConfig configuration class: BertForPreTraining (BERT model) BigBirdConfig configuration class: BigBirdForPreTraining (BigBird model) BloomConfig configuration class: BloomForCausalLM (BLOOM model) CTRLConfig configuration class: CTRLLMHeadModel (CTRL model) CamembertConfig configuration class: CamembertForMaskedLM (CamemBERT model) Data2VecTextConfig configuration class: Data2VecTextForMaskedLM (Data2VecText model) DebertaConfig configuration class: DebertaForMaskedLM (DeBERTa model) DebertaV2Config configuration class: DebertaV2ForMaskedLM (DeBERTa-v2 model) DistilBertConfig configuration class: DistilBertForMaskedLM (DistilBERT model) ElectraConfig configuration class: ElectraForPreTraining (ELECTRA model) ErnieConfig configuration class: ErnieForPreTraining (ERNIE model) FNetConfig configuration class: FNetForPreTraining (FNet model) FSMTConfig configuration class: FSMTForConditionalGeneration (FairSeq Machine-Translation model) FlaubertConfig configuration class: FlaubertWithLMHeadModel (FlauBERT model) FlavaConfig configuration class: FlavaForPreTraining (FLAVA model) FunnelConfig configuration class: FunnelForPreTraining (Funnel Transformer model) GPT2Config configuration class: GPT2LMHeadModel (OpenAI GPT-2 model) GPTBigCodeConfig configuration class: GPTBigCodeForCausalLM (GPTBigCode model) GPTSanJapaneseConfig configuration class: GPTSanJapaneseForConditionalGeneration (GPTSAN-japanese model) IBertConfig configuration class: IBertForMaskedLM (I-BERT model) IdeficsConfig configuration class: IdeficsForVisionText2Text (IDEFICS model) LayoutLMConfig configuration class: LayoutLMForMaskedLM (LayoutLM model) LongformerConfig configuration class: LongformerForMaskedLM (Longformer model) LukeConfig configuration class: LukeForMaskedLM (LUKE model) LxmertConfig configuration class: LxmertForPreTraining (LXMERT model) MPNetConfig configuration class: MPNetForMaskedLM (MPNet model) MegaConfig configuration class: MegaForMaskedLM (MEGA model) MegatronBertConfig configuration class: MegatronBertForPreTraining (Megatron-BERT model) MobileBertConfig configuration class: MobileBertForPreTraining (MobileBERT model) MptConfig configuration class: MptForCausalLM (MPT model) MraConfig configuration class: MraForMaskedLM (MRA model) MvpConfig configuration class: MvpForConditionalGeneration (MVP model) NezhaConfig configuration class: NezhaForPreTraining (Nezha model) NllbMoeConfig configuration class: NllbMoeForConditionalGeneration (NLLB-MOE model) OpenAIGPTConfig configuration class: OpenAIGPTLMHeadModel (OpenAI GPT model) RetriBertConfig configuration class: RetriBertModel (RetriBERT model) RoCBertConfig configuration class: RoCBertForPreTraining (RoCBert model) RobertaConfig configuration class: RobertaForMaskedLM (RoBERTa model) RobertaPreLayerNormConfig configuration class: RobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model) RwkvConfig configuration class: RwkvForCausalLM (RWKV model) SplinterConfig configuration class: SplinterForPreTraining (Splinter model) SqueezeBertConfig configuration class: SqueezeBertForMaskedLM (SqueezeBERT model) SwitchTransformersConfig configuration class: SwitchTransformersForConditionalGeneration (SwitchTransformers model) T5Config configuration class: 
T5ForConditionalGeneration (T5 model) TapasConfig configuration class: TapasForMaskedLM (TAPAS model) TransfoXLConfig configuration class: TransfoXLLMHeadModel (Transformer-XL model) TvltConfig configuration class: TvltForPreTraining (TVLT model) UniSpeechConfig configuration class: UniSpeechForPreTraining (UniSpeech model) UniSpeechSatConfig configuration class: UniSpeechSatForPreTraining (UniSpeechSat model) ViTMAEConfig configuration class: ViTMAEForPreTraining (ViTMAE model) VideoMAEConfig configuration class: VideoMAEForPreTraining (VideoMAE model) VisualBertConfig configuration class: VisualBertForPreTraining (VisualBERT model) Wav2Vec2Config configuration class: Wav2Vec2ForPreTraining (Wav2Vec2 model) Wav2Vec2ConformerConfig configuration class: Wav2Vec2ConformerForPreTraining (Wav2Vec2-Conformer model) XLMConfig configuration class: XLMWithLMHeadModel (XLM model) XLMRobertaConfig configuration class: XLMRobertaForMaskedLM (XLM-RoBERTa model) XLMRobertaXLConfig configuration class: XLMRobertaXLForMaskedLM (XLM-RoBERTa-XL model) XLNetConfig configuration class: XLNetLMHeadModel (XLNet model) XmodConfig configuration class: XmodForMaskedLM (X-MOD model) Instantiates one of the model classes of the library (with a pretraining head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: >>> from transformers import AutoConfig, AutoModelForPreTraining >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = AutoModelForPreTraining.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. 
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of the pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (i.e., do not try to download the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a pretraining head) from a pretrained model.
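The download-related arguments above can be combined as needed; a minimal sketch that pins a revision (the values shown are the defaults, so the call behaves exactly like the plain form):

>>> from transformers import AutoModelForPreTraining

>>> # revision can be any git branch, tag, or commit id on the Hub;
>>> # local_files_only=True would skip the download and rely on the local cache instead.
>>> model = AutoModelForPreTraining.from_pretrained(
...     "bert-base-cased", revision="main", local_files_only=False
... )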
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: albert — AlbertForPreTraining (ALBERT model) bart — BartForConditionalGeneration (BART model) bert — BertForPreTraining (BERT model) big_bird — BigBirdForPreTraining (BigBird model) bloom — BloomForCausalLM (BLOOM model) camembert — CamembertForMaskedLM (CamemBERT model) ctrl — CTRLLMHeadModel (CTRL model) data2vec-text — Data2VecTextForMaskedLM (Data2VecText model) deberta — DebertaForMaskedLM (DeBERTa model) deberta-v2 — DebertaV2ForMaskedLM (DeBERTa-v2 model) distilbert — DistilBertForMaskedLM (DistilBERT model) electra — ElectraForPreTraining (ELECTRA model) ernie — ErnieForPreTraining (ERNIE model) flaubert — FlaubertWithLMHeadModel (FlauBERT model) flava — FlavaForPreTraining (FLAVA model) fnet — FNetForPreTraining (FNet model) fsmt — FSMTForConditionalGeneration (FairSeq Machine-Translation model) funnel — FunnelForPreTraining (Funnel Transformer model) gpt-sw3 — GPT2LMHeadModel (GPT-Sw3 model) gpt2 — GPT2LMHeadModel (OpenAI GPT-2 model) gpt_bigcode — GPTBigCodeForCausalLM (GPTBigCode model) gptsan-japanese — GPTSanJapaneseForConditionalGeneration (GPTSAN-japanese model) ibert — IBertForMaskedLM (I-BERT model) idefics — IdeficsForVisionText2Text (IDEFICS model) layoutlm — LayoutLMForMaskedLM (LayoutLM model) longformer — LongformerForMaskedLM (Longformer model) luke — LukeForMaskedLM (LUKE model) lxmert — LxmertForPreTraining (LXMERT model) mega — MegaForMaskedLM (MEGA model) megatron-bert — MegatronBertForPreTraining (Megatron-BERT model) mobilebert — MobileBertForPreTraining (MobileBERT model) mpnet — MPNetForMaskedLM (MPNet model) mpt — MptForCausalLM (MPT model) mra — MraForMaskedLM (MRA model) mvp — MvpForConditionalGeneration (MVP model) nezha — NezhaForPreTraining (Nezha model) nllb-moe — NllbMoeForConditionalGeneration (NLLB-MOE model) openai-gpt — OpenAIGPTLMHeadModel (OpenAI GPT model) retribert — RetriBertModel (RetriBERT model) roberta — RobertaForMaskedLM (RoBERTa model) roberta-prelayernorm — RobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model) roc_bert — RoCBertForPreTraining (RoCBert model) rwkv — RwkvForCausalLM (RWKV model) splinter — SplinterForPreTraining (Splinter model) squeezebert — SqueezeBertForMaskedLM (SqueezeBERT model) switch_transformers — SwitchTransformersForConditionalGeneration (SwitchTransformers model) t5 — T5ForConditionalGeneration (T5 model) tapas — TapasForMaskedLM (TAPAS model) transfo-xl — TransfoXLLMHeadModel (Transformer-XL model) tvlt — TvltForPreTraining (TVLT model) unispeech — UniSpeechForPreTraining (UniSpeech model) unispeech-sat — UniSpeechSatForPreTraining (UniSpeechSat model) videomae — VideoMAEForPreTraining (VideoMAE model) visual_bert — VisualBertForPreTraining (VisualBERT model) vit_mae — ViTMAEForPreTraining (ViTMAE model) wav2vec2 — Wav2Vec2ForPreTraining (Wav2Vec2 model) wav2vec2-conformer — Wav2Vec2ConformerForPreTraining (Wav2Vec2-Conformer model) xlm — XLMWithLMHeadModel (XLM model) xlm-roberta — XLMRobertaForMaskedLM (XLM-RoBERTa model) xlm-roberta-xl — XLMRobertaXLForMaskedLM (XLM-RoBERTa-XL model) xlnet — XLNetLMHeadModel (XLNet model) xmod — XmodForMaskedLM (X-MOD model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). 
To train the model, you should first set it back in training mode with model.train() Examples: >>> from transformers import AutoConfig, AutoModelForPreTraining >>> >>> model = AutoModelForPreTraining.from_pretrained("bert-base-cased") >>> >>> model = AutoModelForPreTraining.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json") >>> model = AutoModelForPreTraining.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config ... ) TFAutoModelForPreTraining class transformers.TFAutoModelForPreTraining < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a pretraining head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: TFAlbertForPreTraining (ALBERT model) BartConfig configuration class: TFBartForConditionalGeneration (BART model) BertConfig configuration class: TFBertForPreTraining (BERT model) CTRLConfig configuration class: TFCTRLLMHeadModel (CTRL model) CamembertConfig configuration class: TFCamembertForMaskedLM (CamemBERT model) DistilBertConfig configuration class: TFDistilBertForMaskedLM (DistilBERT model) ElectraConfig configuration class: TFElectraForPreTraining (ELECTRA model) FlaubertConfig configuration class: TFFlaubertWithLMHeadModel (FlauBERT model) FunnelConfig configuration class: TFFunnelForPreTraining (Funnel Transformer model) GPT2Config configuration class: TFGPT2LMHeadModel (OpenAI GPT-2 model) LayoutLMConfig configuration class: TFLayoutLMForMaskedLM (LayoutLM model) LxmertConfig configuration class: TFLxmertForPreTraining (LXMERT model) MPNetConfig configuration class: TFMPNetForMaskedLM (MPNet model) MobileBertConfig configuration class: TFMobileBertForPreTraining (MobileBERT model) OpenAIGPTConfig configuration class: TFOpenAIGPTLMHeadModel (OpenAI GPT model) RobertaConfig configuration class: TFRobertaForMaskedLM (RoBERTa model) RobertaPreLayerNormConfig configuration class: TFRobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model) T5Config configuration class: TFT5ForConditionalGeneration (T5 model) TapasConfig configuration class: TFTapasForMaskedLM (TAPAS model) TransfoXLConfig configuration class: TFTransfoXLLMHeadModel (Transformer-XL model) ViTMAEConfig configuration class: TFViTMAEForPreTraining (ViTMAE model) XLMConfig configuration class: TFXLMWithLMHeadModel (XLM model) XLMRobertaConfig configuration class: TFXLMRobertaForMaskedLM (XLM-RoBERTa model) XLNetConfig configuration class: TFXLNetLMHeadModel (XLNet model) Instantiates one of the model classes of the library (with a pretraining head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. 
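A short sketch of that distinction (assuming TensorFlow is installed and the public bert-base-cased checkpoint is reachable):

>>> from transformers import AutoConfig, TFAutoModelForPreTraining

>>> config = AutoConfig.from_pretrained("bert-base-cased")
>>> # Architecture only: the weights are freshly initialized (e.g., for training from scratch).
>>> scratch_model = TFAutoModelForPreTraining.from_config(config)
>>> # Architecture plus the pretrained weights downloaded from the Hub.
>>> pretrained_model = TFAutoModelForPreTraining.from_pretrained("bert-base-cased")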
Examples: >>> from transformers import AutoConfig, TFAutoModelForPreTraining >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = TFAutoModelForPreTraining.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. 
code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a pretraining head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: albert — TFAlbertForPreTraining (ALBERT model) bart — TFBartForConditionalGeneration (BART model) bert — TFBertForPreTraining (BERT model) camembert — TFCamembertForMaskedLM (CamemBERT model) ctrl — TFCTRLLMHeadModel (CTRL model) distilbert — TFDistilBertForMaskedLM (DistilBERT model) electra — TFElectraForPreTraining (ELECTRA model) flaubert — TFFlaubertWithLMHeadModel (FlauBERT model) funnel — TFFunnelForPreTraining (Funnel Transformer model) gpt-sw3 — TFGPT2LMHeadModel (GPT-Sw3 model) gpt2 — TFGPT2LMHeadModel (OpenAI GPT-2 model) layoutlm — TFLayoutLMForMaskedLM (LayoutLM model) lxmert — TFLxmertForPreTraining (LXMERT model) mobilebert — TFMobileBertForPreTraining (MobileBERT model) mpnet — TFMPNetForMaskedLM (MPNet model) openai-gpt — TFOpenAIGPTLMHeadModel (OpenAI GPT model) roberta — TFRobertaForMaskedLM (RoBERTa model) roberta-prelayernorm — TFRobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model) t5 — TFT5ForConditionalGeneration (T5 model) tapas — TFTapasForMaskedLM (TAPAS model) transfo-xl — TFTransfoXLLMHeadModel (Transformer-XL model) vit_mae — TFViTMAEForPreTraining (ViTMAE model) xlm — TFXLMWithLMHeadModel (XLM model) xlm-roberta — TFXLMRobertaForMaskedLM (XLM-RoBERTa model) xlnet — TFXLNetLMHeadModel (XLNet model) Examples: >>> from transformers import AutoConfig, TFAutoModelForPreTraining >>> >>> model = TFAutoModelForPreTraining.from_pretrained("bert-base-cased") >>> >>> model = TFAutoModelForPreTraining.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json") >>> model = TFAutoModelForPreTraining.from_pretrained( ... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config ... 
) FlaxAutoModelForPreTraining class transformers.FlaxAutoModelForPreTraining < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a pretraining head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: FlaxAlbertForPreTraining (ALBERT model) BartConfig configuration class: FlaxBartForConditionalGeneration (BART model) BertConfig configuration class: FlaxBertForPreTraining (BERT model) BigBirdConfig configuration class: FlaxBigBirdForPreTraining (BigBird model) ElectraConfig configuration class: FlaxElectraForPreTraining (ELECTRA model) LongT5Config configuration class: FlaxLongT5ForConditionalGeneration (LongT5 model) MBartConfig configuration class: FlaxMBartForConditionalGeneration (mBART model) MT5Config configuration class: FlaxMT5ForConditionalGeneration (MT5 model) RoFormerConfig configuration class: FlaxRoFormerForMaskedLM (RoFormer model) RobertaConfig configuration class: FlaxRobertaForMaskedLM (RoBERTa model) RobertaPreLayerNormConfig configuration class: FlaxRobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model) T5Config configuration class: FlaxT5ForConditionalGeneration (T5 model) Wav2Vec2Config configuration class: FlaxWav2Vec2ForPreTraining (Wav2Vec2 model) WhisperConfig configuration class: FlaxWhisperForConditionalGeneration (Whisper model) XLMRobertaConfig configuration class: FlaxXLMRobertaForMaskedLM (XLM-RoBERTa model) Instantiates one of the model classes of the library (with a pretraining head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: >>> from transformers import AutoConfig, FlaxAutoModelForPreTraining >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = FlaxAutoModelForPreTraining.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). 
The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a pretraining head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: albert — FlaxAlbertForPreTraining (ALBERT model) bart — FlaxBartForConditionalGeneration (BART model) bert — FlaxBertForPreTraining (BERT model) big_bird — FlaxBigBirdForPreTraining (BigBird model) electra — FlaxElectraForPreTraining (ELECTRA model) longt5 — FlaxLongT5ForConditionalGeneration (LongT5 model) mbart — FlaxMBartForConditionalGeneration (mBART model) mt5 — FlaxMT5ForConditionalGeneration (MT5 model) roberta — FlaxRobertaForMaskedLM (RoBERTa model) roberta-prelayernorm — FlaxRobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model) roformer — FlaxRoFormerForMaskedLM (RoFormer model) t5 — FlaxT5ForConditionalGeneration (T5 model) wav2vec2 — FlaxWav2Vec2ForPreTraining (Wav2Vec2 model) whisper — FlaxWhisperForConditionalGeneration (Whisper model) xlm-roberta — FlaxXLMRobertaForMaskedLM (XLM-RoBERTa model) Examples: >>> from transformers import AutoConfig, FlaxAutoModelForPreTraining >>> >>> model = FlaxAutoModelForPreTraining.from_pretrained("bert-base-cased") >>> >>> model = FlaxAutoModelForPreTraining.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json") >>> model = FlaxAutoModelForPreTraining.from_pretrained( ... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config ... ) Natural Language Processing The following auto classes are available for the following natural language processing tasks. AutoModelForCausalLM class transformers.AutoModelForCausalLM < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a causal language modeling head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). 
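In practice, the model returned by this class is a regular PyTorch module that is typically paired with AutoTokenizer and generate() to produce text. The following is only an illustrative sketch, not part of the reference above; the "gpt2" checkpoint and the generation settings are arbitrary choices:

>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> # Illustrative checkpoint; any causal LM checkpoint on the Hub works the same way.
>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")

>>> # Encode a prompt and generate a short continuation.
>>> inputs = tokenizer("Hello, my name is", return_tensors="pt")
>>> output_ids = model.generate(**inputs, max_new_tokens=20)
>>> print(tokenizer.decode(output_ids[0], skip_special_tokens=True))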
from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: BartConfig configuration class: BartForCausalLM (BART model) BertConfig configuration class: BertLMHeadModel (BERT model) BertGenerationConfig configuration class: BertGenerationDecoder (Bert Generation model) BigBirdConfig configuration class: BigBirdForCausalLM (BigBird model) BigBirdPegasusConfig configuration class: BigBirdPegasusForCausalLM (BigBird-Pegasus model) BioGptConfig configuration class: BioGptForCausalLM (BioGpt model) BlenderbotConfig configuration class: BlenderbotForCausalLM (Blenderbot model) BlenderbotSmallConfig configuration class: BlenderbotSmallForCausalLM (BlenderbotSmall model) BloomConfig configuration class: BloomForCausalLM (BLOOM model) CTRLConfig configuration class: CTRLLMHeadModel (CTRL model) CamembertConfig configuration class: CamembertForCausalLM (CamemBERT model) CodeGenConfig configuration class: CodeGenForCausalLM (CodeGen model) CpmAntConfig configuration class: CpmAntForCausalLM (CPM-Ant model) Data2VecTextConfig configuration class: Data2VecTextForCausalLM (Data2VecText model) ElectraConfig configuration class: ElectraForCausalLM (ELECTRA model) ErnieConfig configuration class: ErnieForCausalLM (ERNIE model) FalconConfig configuration class: FalconForCausalLM (Falcon model) GPT2Config configuration class: GPT2LMHeadModel (OpenAI GPT-2 model) GPTBigCodeConfig configuration class: GPTBigCodeForCausalLM (GPTBigCode model) GPTJConfig configuration class: GPTJForCausalLM (GPT-J model) GPTNeoConfig configuration class: GPTNeoForCausalLM (GPT Neo model) GPTNeoXConfig configuration class: GPTNeoXForCausalLM (GPT NeoX model) GPTNeoXJapaneseConfig configuration class: GPTNeoXJapaneseForCausalLM (GPT NeoX Japanese model) GitConfig configuration class: GitForCausalLM (GIT model) LlamaConfig configuration class: LlamaForCausalLM (LLaMA model) MBartConfig configuration class: MBartForCausalLM (mBART model) MarianConfig configuration class: MarianForCausalLM (Marian model) MegaConfig configuration class: MegaForCausalLM (MEGA model) MegatronBertConfig configuration class: MegatronBertForCausalLM (Megatron-BERT model) MistralConfig configuration class: MistralForCausalLM (Mistral model) MptConfig configuration class: MptForCausalLM (MPT model) MusicgenConfig configuration class: MusicgenForCausalLM (MusicGen model) MvpConfig configuration class: MvpForCausalLM (MVP model) OPTConfig configuration class: OPTForCausalLM (OPT model) OpenAIGPTConfig configuration class: OpenAIGPTLMHeadModel (OpenAI GPT model) OpenLlamaConfig configuration class: OpenLlamaForCausalLM (OpenLlama model) PLBartConfig configuration class: PLBartForCausalLM (PLBart model) PegasusConfig configuration class: PegasusForCausalLM (Pegasus model) PersimmonConfig configuration class: PersimmonForCausalLM (Persimmon model) ProphetNetConfig configuration class: ProphetNetForCausalLM (ProphetNet model) QDQBertConfig configuration class: QDQBertLMHeadModel (QDQBert model) ReformerConfig configuration class: ReformerModelWithLMHead (Reformer model) RemBertConfig configuration class: RemBertForCausalLM (RemBERT model) RoCBertConfig configuration class: RoCBertForCausalLM (RoCBert model) RoFormerConfig configuration class: RoFormerForCausalLM (RoFormer model) RobertaConfig configuration class: RobertaForCausalLM (RoBERTa model) RobertaPreLayerNormConfig configuration class: RobertaPreLayerNormForCausalLM (RoBERTa-PreLayerNorm model) RwkvConfig 
configuration class: RwkvForCausalLM (RWKV model) Speech2Text2Config configuration class: Speech2Text2ForCausalLM (Speech2Text2 model) TrOCRConfig configuration class: TrOCRForCausalLM (TrOCR model) TransfoXLConfig configuration class: TransfoXLLMHeadModel (Transformer-XL model) XGLMConfig configuration class: XGLMForCausalLM (XGLM model) XLMConfig configuration class: XLMWithLMHeadModel (XLM model) XLMProphetNetConfig configuration class: XLMProphetNetForCausalLM (XLM-ProphetNet model) XLMRobertaConfig configuration class: XLMRobertaForCausalLM (XLM-RoBERTa model) XLMRobertaXLConfig configuration class: XLMRobertaXLForCausalLM (XLM-RoBERTa-XL model) XLNetConfig configuration class: XLNetLMHeadModel (XLNet model) XmodConfig configuration class: XmodForCausalLM (X-MOD model) Instantiates one of the model classes of the library (with a causal language modeling head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: >>> from transformers import AutoConfig, AutoModelForCausalLM >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = AutoModelForCausalLM.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). 
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a causal language modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: bart — BartForCausalLM (BART model) bert — BertLMHeadModel (BERT model) bert-generation — BertGenerationDecoder (Bert Generation model) big_bird — BigBirdForCausalLM (BigBird model) bigbird_pegasus — BigBirdPegasusForCausalLM (BigBird-Pegasus model) biogpt — BioGptForCausalLM (BioGpt model) blenderbot — BlenderbotForCausalLM (Blenderbot model) blenderbot-small — BlenderbotSmallForCausalLM (BlenderbotSmall model) bloom — BloomForCausalLM (BLOOM model) camembert — CamembertForCausalLM (CamemBERT model) code_llama — LlamaForCausalLM (CodeLlama model) codegen — CodeGenForCausalLM (CodeGen model) cpmant — CpmAntForCausalLM (CPM-Ant model) ctrl — CTRLLMHeadModel (CTRL model) data2vec-text — Data2VecTextForCausalLM (Data2VecText model) electra — ElectraForCausalLM (ELECTRA model) ernie — ErnieForCausalLM (ERNIE model) falcon — FalconForCausalLM (Falcon model) git — GitForCausalLM (GIT model) gpt-sw3 — GPT2LMHeadModel (GPT-Sw3 model) gpt2 — GPT2LMHeadModel (OpenAI GPT-2 model) gpt_bigcode — GPTBigCodeForCausalLM (GPTBigCode model) gpt_neo — GPTNeoForCausalLM (GPT Neo model) gpt_neox — GPTNeoXForCausalLM (GPT NeoX model) gpt_neox_japanese — GPTNeoXJapaneseForCausalLM (GPT NeoX Japanese model) gptj — GPTJForCausalLM (GPT-J model) llama — LlamaForCausalLM (LLaMA model) marian — MarianForCausalLM (Marian model) mbart — MBartForCausalLM (mBART model) mega — MegaForCausalLM (MEGA model) megatron-bert — MegatronBertForCausalLM (Megatron-BERT model) mistral — MistralForCausalLM (Mistral model) mpt — MptForCausalLM (MPT model) musicgen — MusicgenForCausalLM (MusicGen model) mvp — MvpForCausalLM (MVP model) open-llama — OpenLlamaForCausalLM (OpenLlama model) openai-gpt — OpenAIGPTLMHeadModel (OpenAI GPT model) opt — OPTForCausalLM (OPT model) pegasus — PegasusForCausalLM (Pegasus model) persimmon — PersimmonForCausalLM (Persimmon model) plbart — PLBartForCausalLM (PLBart model) prophetnet — ProphetNetForCausalLM (ProphetNet model) qdqbert — QDQBertLMHeadModel (QDQBert model) reformer — ReformerModelWithLMHead (Reformer model) rembert — RemBertForCausalLM (RemBERT model) roberta — RobertaForCausalLM (RoBERTa model) roberta-prelayernorm — RobertaPreLayerNormForCausalLM (RoBERTa-PreLayerNorm model) roc_bert — RoCBertForCausalLM (RoCBert model) roformer — RoFormerForCausalLM (RoFormer model) rwkv — RwkvForCausalLM (RWKV model) speech_to_text_2 — Speech2Text2ForCausalLM (Speech2Text2 model) transfo-xl — TransfoXLLMHeadModel (Transformer-XL model) trocr — TrOCRForCausalLM (TrOCR model) xglm — XGLMForCausalLM (XGLM model) xlm — XLMWithLMHeadModel (XLM model) xlm-prophetnet — XLMProphetNetForCausalLM (XLM-ProphetNet model) xlm-roberta — XLMRobertaForCausalLM (XLM-RoBERTa model) xlm-roberta-xl — XLMRobertaXLForCausalLM (XLM-RoBERTa-XL model) xlnet — XLNetLMHeadModel (XLNet model) xmod — XmodForCausalLM (X-MOD model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). 
To train the model, you should first set it back in training mode with model.train() Examples: >>> from transformers import AutoConfig, AutoModelForCausalLM >>> >>> model = AutoModelForCausalLM.from_pretrained("bert-base-cased") >>> >>> model = AutoModelForCausalLM.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json") >>> model = AutoModelForCausalLM.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config ... ) TFAutoModelForCausalLM class transformers.TFAutoModelForCausalLM < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a causal language modeling head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: BertConfig configuration class: TFBertLMHeadModel (BERT model) CTRLConfig configuration class: TFCTRLLMHeadModel (CTRL model) CamembertConfig configuration class: TFCamembertForCausalLM (CamemBERT model) GPT2Config configuration class: TFGPT2LMHeadModel (OpenAI GPT-2 model) GPTJConfig configuration class: TFGPTJForCausalLM (GPT-J model) OPTConfig configuration class: TFOPTForCausalLM (OPT model) OpenAIGPTConfig configuration class: TFOpenAIGPTLMHeadModel (OpenAI GPT model) RemBertConfig configuration class: TFRemBertForCausalLM (RemBERT model) RoFormerConfig configuration class: TFRoFormerForCausalLM (RoFormer model) RobertaConfig configuration class: TFRobertaForCausalLM (RoBERTa model) RobertaPreLayerNormConfig configuration class: TFRobertaPreLayerNormForCausalLM (RoBERTa-PreLayerNorm model) TransfoXLConfig configuration class: TFTransfoXLLMHeadModel (Transformer-XL model) XGLMConfig configuration class: TFXGLMForCausalLM (XGLM model) XLMConfig configuration class: TFXLMWithLMHeadModel (XLM model) XLMRobertaConfig configuration class: TFXLMRobertaForCausalLM (XLM-RoBERTa model) XLNetConfig configuration class: TFXLNetLMHeadModel (XLNet model) Instantiates one of the model classes of the library (with a causal language modeling head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: >>> from transformers import AutoConfig, TFAutoModelForCausalLM >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = TFAutoModelForCausalLM.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. 
This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). 
Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a causal language modeling head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: bert — TFBertLMHeadModel (BERT model) camembert — TFCamembertForCausalLM (CamemBERT model) ctrl — TFCTRLLMHeadModel (CTRL model) gpt-sw3 — TFGPT2LMHeadModel (GPT-Sw3 model) gpt2 — TFGPT2LMHeadModel (OpenAI GPT-2 model) gptj — TFGPTJForCausalLM (GPT-J model) openai-gpt — TFOpenAIGPTLMHeadModel (OpenAI GPT model) opt — TFOPTForCausalLM (OPT model) rembert — TFRemBertForCausalLM (RemBERT model) roberta — TFRobertaForCausalLM (RoBERTa model) roberta-prelayernorm — TFRobertaPreLayerNormForCausalLM (RoBERTa-PreLayerNorm model) roformer — TFRoFormerForCausalLM (RoFormer model) transfo-xl — TFTransfoXLLMHeadModel (Transformer-XL model) xglm — TFXGLMForCausalLM (XGLM model) xlm — TFXLMWithLMHeadModel (XLM model) xlm-roberta — TFXLMRobertaForCausalLM (XLM-RoBERTa model) xlnet — TFXLNetLMHeadModel (XLNet model) Examples: >>> from transformers import AutoConfig, TFAutoModelForCausalLM >>> >>> model = TFAutoModelForCausalLM.from_pretrained("bert-base-cased") >>> >>> model = TFAutoModelForCausalLM.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json") >>> model = TFAutoModelForCausalLM.from_pretrained( ... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config ... ) FlaxAutoModelForCausalLM class transformers.FlaxAutoModelForCausalLM < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a causal language modeling head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). 
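As a rough sketch of typical usage (the "gpt2" checkpoint is an arbitrary choice and not prescribed by this reference), the loaded Flax module is called on NumPy or JAX arrays and returns logits:

>>> from transformers import AutoTokenizer, FlaxAutoModelForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = FlaxAutoModelForCausalLM.from_pretrained("gpt2")

>>> # Flax models consume NumPy/JAX arrays rather than PyTorch tensors.
>>> inputs = tokenizer("Hello, my name is", return_tensors="np")
>>> outputs = model(**inputs)
>>> outputs.logits.shape  # (batch_size, sequence_length, vocab_size)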
from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: BartConfig configuration class: FlaxBartForCausalLM (BART model) BertConfig configuration class: FlaxBertForCausalLM (BERT model) BigBirdConfig configuration class: FlaxBigBirdForCausalLM (BigBird model) BloomConfig configuration class: FlaxBloomForCausalLM (BLOOM model) ElectraConfig configuration class: FlaxElectraForCausalLM (ELECTRA model) GPT2Config configuration class: FlaxGPT2LMHeadModel (OpenAI GPT-2 model) GPTJConfig configuration class: FlaxGPTJForCausalLM (GPT-J model) GPTNeoConfig configuration class: FlaxGPTNeoForCausalLM (GPT Neo model) OPTConfig configuration class: FlaxOPTForCausalLM (OPT model) RobertaConfig configuration class: FlaxRobertaForCausalLM (RoBERTa model) RobertaPreLayerNormConfig configuration class: FlaxRobertaPreLayerNormForCausalLM (RoBERTa-PreLayerNorm model) XGLMConfig configuration class: FlaxXGLMForCausalLM (XGLM model) XLMRobertaConfig configuration class: FlaxXLMRobertaForCausalLM (XLM-RoBERTa model) Instantiates one of the model classes of the library (with a causal language modeling head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: >>> from transformers import AutoConfig, FlaxAutoModelForCausalLM >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = FlaxAutoModelForCausalLM.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). 
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a causal language modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: bart — FlaxBartForCausalLM (BART model) bert — FlaxBertForCausalLM (BERT model) big_bird — FlaxBigBirdForCausalLM (BigBird model) bloom — FlaxBloomForCausalLM (BLOOM model) electra — FlaxElectraForCausalLM (ELECTRA model) gpt-sw3 — FlaxGPT2LMHeadModel (GPT-Sw3 model) gpt2 — FlaxGPT2LMHeadModel (OpenAI GPT-2 model) gpt_neo — FlaxGPTNeoForCausalLM (GPT Neo model) gptj — FlaxGPTJForCausalLM (GPT-J model) opt — FlaxOPTForCausalLM (OPT model) roberta — FlaxRobertaForCausalLM (RoBERTa model) roberta-prelayernorm — FlaxRobertaPreLayerNormForCausalLM (RoBERTa-PreLayerNorm model) xglm — FlaxXGLMForCausalLM (XGLM model) xlm-roberta — FlaxXLMRobertaForCausalLM (XLM-RoBERTa model) Examples: >>> from transformers import AutoConfig, FlaxAutoModelForCausalLM >>> >>> model = FlaxAutoModelForCausalLM.from_pretrained("bert-base-cased") >>> >>> model = FlaxAutoModelForCausalLM.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json") >>> model = FlaxAutoModelForCausalLM.from_pretrained( ... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config ... ) AutoModelForMaskedLM class transformers.AutoModelForMaskedLM < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a masked language modeling head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). 
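For orientation, a model loaded through this class is typically used to predict a masked token. The sketch below is illustrative only and assumes a BERT-style checkpoint whose tokenizer defines a [MASK] token:

>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForMaskedLM

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
>>> model = AutoModelForMaskedLM.from_pretrained("bert-base-cased")

>>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # Pick the highest-scoring token at the position of the [MASK] token.
>>> mask_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
>>> predicted_id = logits[0, mask_index].argmax(dim=-1)
>>> tokenizer.decode(predicted_id)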
from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: AlbertForMaskedLM (ALBERT model) BartConfig configuration class: BartForConditionalGeneration (BART model) BertConfig configuration class: BertForMaskedLM (BERT model) BigBirdConfig configuration class: BigBirdForMaskedLM (BigBird model) CamembertConfig configuration class: CamembertForMaskedLM (CamemBERT model) ConvBertConfig configuration class: ConvBertForMaskedLM (ConvBERT model) Data2VecTextConfig configuration class: Data2VecTextForMaskedLM (Data2VecText model) DebertaConfig configuration class: DebertaForMaskedLM (DeBERTa model) DebertaV2Config configuration class: DebertaV2ForMaskedLM (DeBERTa-v2 model) DistilBertConfig configuration class: DistilBertForMaskedLM (DistilBERT model) ElectraConfig configuration class: ElectraForMaskedLM (ELECTRA model) ErnieConfig configuration class: ErnieForMaskedLM (ERNIE model) EsmConfig configuration class: EsmForMaskedLM (ESM model) FNetConfig configuration class: FNetForMaskedLM (FNet model) FlaubertConfig configuration class: FlaubertWithLMHeadModel (FlauBERT model) FunnelConfig configuration class: FunnelForMaskedLM (Funnel Transformer model) IBertConfig configuration class: IBertForMaskedLM (I-BERT model) LayoutLMConfig configuration class: LayoutLMForMaskedLM (LayoutLM model) LongformerConfig configuration class: LongformerForMaskedLM (Longformer model) LukeConfig configuration class: LukeForMaskedLM (LUKE model) MBartConfig configuration class: MBartForConditionalGeneration (mBART model) MPNetConfig configuration class: MPNetForMaskedLM (MPNet model) MegaConfig configuration class: MegaForMaskedLM (MEGA model) MegatronBertConfig configuration class: MegatronBertForMaskedLM (Megatron-BERT model) MobileBertConfig configuration class: MobileBertForMaskedLM (MobileBERT model) MraConfig configuration class: MraForMaskedLM (MRA model) MvpConfig configuration class: MvpForConditionalGeneration (MVP model) NezhaConfig configuration class: NezhaForMaskedLM (Nezha model) NystromformerConfig configuration class: NystromformerForMaskedLM (Nyströmformer model) PerceiverConfig configuration class: PerceiverForMaskedLM (Perceiver model) QDQBertConfig configuration class: QDQBertForMaskedLM (QDQBert model) ReformerConfig configuration class: ReformerForMaskedLM (Reformer model) RemBertConfig configuration class: RemBertForMaskedLM (RemBERT model) RoCBertConfig configuration class: RoCBertForMaskedLM (RoCBert model) RoFormerConfig configuration class: RoFormerForMaskedLM (RoFormer model) RobertaConfig configuration class: RobertaForMaskedLM (RoBERTa model) RobertaPreLayerNormConfig configuration class: RobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model) SqueezeBertConfig configuration class: SqueezeBertForMaskedLM (SqueezeBERT model) TapasConfig configuration class: TapasForMaskedLM (TAPAS model) Wav2Vec2Config configuration class: Wav2Vec2ForMaskedLM (Wav2Vec2 model) XLMConfig configuration class: XLMWithLMHeadModel (XLM model) XLMRobertaConfig configuration class: XLMRobertaForMaskedLM (XLM-RoBERTa model) XLMRobertaXLConfig configuration class: XLMRobertaXLForMaskedLM (XLM-RoBERTa-XL model) XmodConfig configuration class: XmodForMaskedLM (X-MOD model) YosoConfig configuration class: YosoForMaskedLM (YOSO model) Instantiates one of the model classes of the library (with a masked language modeling head) from a configuration. 
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: >>> from transformers import AutoConfig, AutoModelForMaskedLM >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = AutoModelForMaskedLM.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. 
It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a masked language modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: albert — AlbertForMaskedLM (ALBERT model) bart — BartForConditionalGeneration (BART model) bert — BertForMaskedLM (BERT model) big_bird — BigBirdForMaskedLM (BigBird model) camembert — CamembertForMaskedLM (CamemBERT model) convbert — ConvBertForMaskedLM (ConvBERT model) data2vec-text — Data2VecTextForMaskedLM (Data2VecText model) deberta — DebertaForMaskedLM (DeBERTa model) deberta-v2 — DebertaV2ForMaskedLM (DeBERTa-v2 model) distilbert — DistilBertForMaskedLM (DistilBERT model) electra — ElectraForMaskedLM (ELECTRA model) ernie — ErnieForMaskedLM (ERNIE model) esm — EsmForMaskedLM (ESM model) flaubert — FlaubertWithLMHeadModel (FlauBERT model) fnet — FNetForMaskedLM (FNet model) funnel — FunnelForMaskedLM (Funnel Transformer model) ibert — IBertForMaskedLM (I-BERT model) layoutlm — LayoutLMForMaskedLM (LayoutLM model) longformer — LongformerForMaskedLM (Longformer model) luke — LukeForMaskedLM (LUKE model) mbart — MBartForConditionalGeneration (mBART model) mega — MegaForMaskedLM (MEGA model) megatron-bert — MegatronBertForMaskedLM (Megatron-BERT model) mobilebert — MobileBertForMaskedLM (MobileBERT model) mpnet — MPNetForMaskedLM (MPNet model) mra — MraForMaskedLM (MRA model) mvp — MvpForConditionalGeneration (MVP model) nezha — NezhaForMaskedLM (Nezha model) nystromformer — NystromformerForMaskedLM (Nyströmformer model) perceiver — PerceiverForMaskedLM (Perceiver model) qdqbert — QDQBertForMaskedLM (QDQBert model) reformer — ReformerForMaskedLM (Reformer model) rembert — RemBertForMaskedLM (RemBERT model) roberta — RobertaForMaskedLM (RoBERTa model) roberta-prelayernorm — RobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model) roc_bert — RoCBertForMaskedLM (RoCBert model) roformer — RoFormerForMaskedLM (RoFormer model) squeezebert — SqueezeBertForMaskedLM (SqueezeBERT model) tapas — TapasForMaskedLM (TAPAS model) wav2vec2 — Wav2Vec2ForMaskedLM (Wav2Vec2 model) xlm — XLMWithLMHeadModel (XLM model) xlm-roberta — XLMRobertaForMaskedLM (XLM-RoBERTa model) xlm-roberta-xl — XLMRobertaXLForMaskedLM (XLM-RoBERTa-XL model) xmod — XmodForMaskedLM (X-MOD model) yoso — YosoForMaskedLM (YOSO model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: >>> from transformers import AutoConfig, AutoModelForMaskedLM >>> >>> model = AutoModelForMaskedLM.from_pretrained("bert-base-cased") >>> >>> model = AutoModelForMaskedLM.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json") >>> model = AutoModelForMaskedLM.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config ... ) TFAutoModelForMaskedLM class transformers.TFAutoModelForMaskedLM < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a masked language modeling head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). 
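Usage mirrors the PyTorch class, only with TensorFlow tensors; the short sketch below is illustrative and assumes the "bert-base-cased" checkpoint:

>>> import tensorflow as tf
>>> from transformers import AutoTokenizer, TFAutoModelForMaskedLM

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
>>> model = TFAutoModelForMaskedLM.from_pretrained("bert-base-cased")

>>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="tf")
>>> logits = model(**inputs).logits

>>> # Index of the [MASK] token and its most likely replacement.
>>> mask_index = tf.where(inputs.input_ids[0] == tokenizer.mask_token_id)[0, 0]
>>> predicted_id = tf.argmax(logits[0, mask_index])
>>> tokenizer.decode([int(predicted_id)])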
from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: TFAlbertForMaskedLM (ALBERT model) BertConfig configuration class: TFBertForMaskedLM (BERT model) CamembertConfig configuration class: TFCamembertForMaskedLM (CamemBERT model) ConvBertConfig configuration class: TFConvBertForMaskedLM (ConvBERT model) DebertaConfig configuration class: TFDebertaForMaskedLM (DeBERTa model) DebertaV2Config configuration class: TFDebertaV2ForMaskedLM (DeBERTa-v2 model) DistilBertConfig configuration class: TFDistilBertForMaskedLM (DistilBERT model) ElectraConfig configuration class: TFElectraForMaskedLM (ELECTRA model) EsmConfig configuration class: TFEsmForMaskedLM (ESM model) FlaubertConfig configuration class: TFFlaubertWithLMHeadModel (FlauBERT model) FunnelConfig configuration class: TFFunnelForMaskedLM (Funnel Transformer model) LayoutLMConfig configuration class: TFLayoutLMForMaskedLM (LayoutLM model) LongformerConfig configuration class: TFLongformerForMaskedLM (Longformer model) MPNetConfig configuration class: TFMPNetForMaskedLM (MPNet model) MobileBertConfig configuration class: TFMobileBertForMaskedLM (MobileBERT model) RemBertConfig configuration class: TFRemBertForMaskedLM (RemBERT model) RoFormerConfig configuration class: TFRoFormerForMaskedLM (RoFormer model) RobertaConfig configuration class: TFRobertaForMaskedLM (RoBERTa model) RobertaPreLayerNormConfig configuration class: TFRobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model) TapasConfig configuration class: TFTapasForMaskedLM (TAPAS model) XLMConfig configuration class: TFXLMWithLMHeadModel (XLM model) XLMRobertaConfig configuration class: TFXLMRobertaForMaskedLM (XLM-RoBERTa model) Instantiates one of the model classes of the library (with a masked language modeling head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: >>> from transformers import AutoConfig, TFAutoModelForMaskedLM >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = TFAutoModelForMaskedLM.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). 
The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a masked language modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: albert — TFAlbertForMaskedLM (ALBERT model) bert — TFBertForMaskedLM (BERT model) camembert — TFCamembertForMaskedLM (CamemBERT model) convbert — TFConvBertForMaskedLM (ConvBERT model) deberta — TFDebertaForMaskedLM (DeBERTa model) deberta-v2 — TFDebertaV2ForMaskedLM (DeBERTa-v2 model) distilbert — TFDistilBertForMaskedLM (DistilBERT model) electra — TFElectraForMaskedLM (ELECTRA model) esm — TFEsmForMaskedLM (ESM model) flaubert — TFFlaubertWithLMHeadModel (FlauBERT model) funnel — TFFunnelForMaskedLM (Funnel Transformer model) layoutlm — TFLayoutLMForMaskedLM (LayoutLM model) longformer — TFLongformerForMaskedLM (Longformer model) mobilebert — TFMobileBertForMaskedLM (MobileBERT model) mpnet — TFMPNetForMaskedLM (MPNet model) rembert — TFRemBertForMaskedLM (RemBERT model) roberta — TFRobertaForMaskedLM (RoBERTa model) roberta-prelayernorm — TFRobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model) roformer — TFRoFormerForMaskedLM (RoFormer model) tapas — TFTapasForMaskedLM (TAPAS model) xlm — TFXLMWithLMHeadModel (XLM model) xlm-roberta — TFXLMRobertaForMaskedLM (XLM-RoBERTa model) Examples: >>> from transformers import AutoConfig, TFAutoModelForMaskedLM >>> >>> model = TFAutoModelForMaskedLM.from_pretrained("bert-base-cased") >>> >>> model = TFAutoModelForMaskedLM.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json") >>> model = TFAutoModelForMaskedLM.from_pretrained( ... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config ... ) FlaxAutoModelForMaskedLM class transformers.FlaxAutoModelForMaskedLM < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a masked language modeling head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: FlaxAlbertForMaskedLM (ALBERT model) BartConfig configuration class: FlaxBartForConditionalGeneration (BART model) BertConfig configuration class: FlaxBertForMaskedLM (BERT model) BigBirdConfig configuration class: FlaxBigBirdForMaskedLM (BigBird model) DistilBertConfig configuration class: FlaxDistilBertForMaskedLM (DistilBERT model) ElectraConfig configuration class: FlaxElectraForMaskedLM (ELECTRA model) MBartConfig configuration class: FlaxMBartForConditionalGeneration (mBART model) RoFormerConfig configuration class: FlaxRoFormerForMaskedLM (RoFormer model) RobertaConfig configuration class: FlaxRobertaForMaskedLM (RoBERTa model) RobertaPreLayerNormConfig configuration class: FlaxRobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model) XLMRobertaConfig configuration class: FlaxXLMRobertaForMaskedLM (XLM-RoBERTa model) Instantiates one of the model classes of the library (with a masked language modeling head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. 
Use from_pretrained() to load the model weights. Examples: >>> from transformers import AutoConfig, FlaxAutoModelForMaskedLM >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = FlaxAutoModelForMaskedLM.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or URL to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model into a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it is loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a masked language modeling head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: albert — FlaxAlbertForMaskedLM (ALBERT model) bart — FlaxBartForConditionalGeneration (BART model) bert — FlaxBertForMaskedLM (BERT model) big_bird — FlaxBigBirdForMaskedLM (BigBird model) distilbert — FlaxDistilBertForMaskedLM (DistilBERT model) electra — FlaxElectraForMaskedLM (ELECTRA model) mbart — FlaxMBartForConditionalGeneration (mBART model) roberta — FlaxRobertaForMaskedLM (RoBERTa model) roberta-prelayernorm — FlaxRobertaPreLayerNormForMaskedLM (RoBERTa-PreLayerNorm model) roformer — FlaxRoFormerForMaskedLM (RoFormer model) xlm-roberta — FlaxXLMRobertaForMaskedLM (XLM-RoBERTa model) Examples: >>> from transformers import AutoConfig, FlaxAutoModelForMaskedLM >>> >>> model = FlaxAutoModelForMaskedLM.from_pretrained("bert-base-cased") >>> >>> model = FlaxAutoModelForMaskedLM.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json") >>> model = FlaxAutoModelForMaskedLM.from_pretrained( ... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config ... ) AutoModelForMaskGeneration class transformers.AutoModelForMaskGeneration < source > ( *args **kwargs ) TFAutoModelForMaskGeneration class transformers.TFAutoModelForMaskGeneration < source > ( *args **kwargs ) AutoModelForSeq2SeqLM class transformers.AutoModelForSeq2SeqLM < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence language modeling head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error).
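As a quick orientation before the method reference below, here is a minimal end-to-end sketch of how AutoModelForSeq2SeqLM is typically paired with AutoTokenizer for generation. The t5-small checkpoint and the translation prompt are assumptions chosen for illustration; they are not part of this API reference.
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> # Load a seq2seq checkpoint together with its tokenizer (t5-small is assumed here).
>>> tokenizer = AutoTokenizer.from_pretrained("t5-small")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
>>> # Encode a prompt and generate with the sequence-to-sequence language modeling head.
>>> inputs = tokenizer("translate English to German: How are you?", return_tensors="pt")
>>> outputs = model.generate(**inputs, max_new_tokens=20)
>>> print(tokenizer.decode(outputs[0], skip_special_tokens=True))
Generation-time options such as max_new_tokens or num_beams are passed to generate(), whereas keyword arguments given to from_pretrained() update the configuration as described in the parameter list below.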
from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: BartConfig configuration class: BartForConditionalGeneration (BART model) BigBirdPegasusConfig configuration class: BigBirdPegasusForConditionalGeneration (BigBird-Pegasus model) BlenderbotConfig configuration class: BlenderbotForConditionalGeneration (Blenderbot model) BlenderbotSmallConfig configuration class: BlenderbotSmallForConditionalGeneration (BlenderbotSmall model) EncoderDecoderConfig configuration class: EncoderDecoderModel (Encoder decoder model) FSMTConfig configuration class: FSMTForConditionalGeneration (FairSeq Machine-Translation model) GPTSanJapaneseConfig configuration class: GPTSanJapaneseForConditionalGeneration (GPTSAN-japanese model) LEDConfig configuration class: LEDForConditionalGeneration (LED model) LongT5Config configuration class: LongT5ForConditionalGeneration (LongT5 model) M2M100Config configuration class: M2M100ForConditionalGeneration (M2M100 model) MBartConfig configuration class: MBartForConditionalGeneration (mBART model) MT5Config configuration class: MT5ForConditionalGeneration (MT5 model) MarianConfig configuration class: MarianMTModel (Marian model) MvpConfig configuration class: MvpForConditionalGeneration (MVP model) NllbMoeConfig configuration class: NllbMoeForConditionalGeneration (NLLB-MOE model) PLBartConfig configuration class: PLBartForConditionalGeneration (PLBart model) PegasusConfig configuration class: PegasusForConditionalGeneration (Pegasus model) PegasusXConfig configuration class: PegasusXForConditionalGeneration (PEGASUS-X model) ProphetNetConfig configuration class: ProphetNetForConditionalGeneration (ProphetNet model) SwitchTransformersConfig configuration class: SwitchTransformersForConditionalGeneration (SwitchTransformers model) T5Config configuration class: T5ForConditionalGeneration (T5 model) UMT5Config configuration class: UMT5ForConditionalGeneration (UMT5 model) XLMProphetNetConfig configuration class: XLMProphetNetForConditionalGeneration (XLM-ProphetNet model) Instantiates one of the model classes of the library (with a sequence-to-sequence language modeling head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: >>> from transformers import AutoConfig, AutoModelForSeq2SeqLM >>> >>> config = AutoConfig.from_pretrained("t5-base") >>> model = AutoModelForSeq2SeqLM.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or URL to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it is loaded) and initialize the model (e.g., output_attentions=True).
Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a sequence-to-sequence language modeling head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: bart — BartForConditionalGeneration (BART model) bigbird_pegasus — BigBirdPegasusForConditionalGeneration (BigBird-Pegasus model) blenderbot — BlenderbotForConditionalGeneration (Blenderbot model) blenderbot-small — BlenderbotSmallForConditionalGeneration (BlenderbotSmall model) encoder-decoder — EncoderDecoderModel (Encoder decoder model) fsmt — FSMTForConditionalGeneration (FairSeq Machine-Translation model) gptsan-japanese — GPTSanJapaneseForConditionalGeneration (GPTSAN-japanese model) led — LEDForConditionalGeneration (LED model) longt5 — LongT5ForConditionalGeneration (LongT5 model) m2m_100 — M2M100ForConditionalGeneration (M2M100 model) marian — MarianMTModel (Marian model) mbart — MBartForConditionalGeneration (mBART model) mt5 — MT5ForConditionalGeneration (MT5 model) mvp — MvpForConditionalGeneration (MVP model) nllb-moe — NllbMoeForConditionalGeneration (NLLB-MOE model) pegasus — PegasusForConditionalGeneration (Pegasus model) pegasus_x — PegasusXForConditionalGeneration (PEGASUS-X model) plbart — PLBartForConditionalGeneration (PLBart model) prophetnet — ProphetNetForConditionalGeneration (ProphetNet model) switch_transformers — SwitchTransformersForConditionalGeneration (SwitchTransformers model) t5 — T5ForConditionalGeneration (T5 model) umt5 — UMT5ForConditionalGeneration (UMT5 model) xlm-prophetnet — XLMProphetNetForConditionalGeneration (XLM-ProphetNet model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: >>> from transformers import AutoConfig, AutoModelForSeq2SeqLM >>> >>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base") >>> >>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./tf_model/t5_tf_model_config.json") >>> model = AutoModelForSeq2SeqLM.from_pretrained( ... "./tf_model/t5_tf_checkpoint.ckpt.index", from_tf=True, config=config ... ) TFAutoModelForSeq2SeqLM class transformers.TFAutoModelForSeq2SeqLM < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence language modeling head) when created with the from_pretrained() class method or the from_config() class method. 
This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: BartConfig configuration class: TFBartForConditionalGeneration (BART model) BlenderbotConfig configuration class: TFBlenderbotForConditionalGeneration (Blenderbot model) BlenderbotSmallConfig configuration class: TFBlenderbotSmallForConditionalGeneration (BlenderbotSmall model) EncoderDecoderConfig configuration class: TFEncoderDecoderModel (Encoder decoder model) LEDConfig configuration class: TFLEDForConditionalGeneration (LED model) MBartConfig configuration class: TFMBartForConditionalGeneration (mBART model) MT5Config configuration class: TFMT5ForConditionalGeneration (MT5 model) MarianConfig configuration class: TFMarianMTModel (Marian model) PegasusConfig configuration class: TFPegasusForConditionalGeneration (Pegasus model) T5Config configuration class: TFT5ForConditionalGeneration (T5 model) Instantiates one of the model classes of the library (with a sequence-to-sequence language modeling head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: >>> from transformers import AutoConfig, TFAutoModelForSeq2SeqLM >>> >>> config = AutoConfig.from_pretrained("t5-base") >>> model = TFAutoModelForSeq2SeqLM.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or URL to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model into a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it is loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a sequence-to-sequence language modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: bart — TFBartForConditionalGeneration (BART model) blenderbot — TFBlenderbotForConditionalGeneration (Blenderbot model) blenderbot-small — TFBlenderbotSmallForConditionalGeneration (BlenderbotSmall model) encoder-decoder — TFEncoderDecoderModel (Encoder decoder model) led — TFLEDForConditionalGeneration (LED model) marian — TFMarianMTModel (Marian model) mbart — TFMBartForConditionalGeneration (mBART model) mt5 — TFMT5ForConditionalGeneration (MT5 model) pegasus — TFPegasusForConditionalGeneration (Pegasus model) t5 — TFT5ForConditionalGeneration (T5 model) Examples: >>> from transformers import AutoConfig, TFAutoModelForSeq2SeqLM >>> >>> model = TFAutoModelForSeq2SeqLM.from_pretrained("t5-base") >>> >>> model = TFAutoModelForSeq2SeqLM.from_pretrained("t5-base", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./pt_model/t5_pt_model_config.json") >>> model = TFAutoModelForSeq2SeqLM.from_pretrained( ... "./pt_model/t5_pytorch_model.bin", from_pt=True, config=config ... ) FlaxAutoModelForSeq2SeqLM class transformers.FlaxAutoModelForSeq2SeqLM < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence language modeling head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: BartConfig configuration class: FlaxBartForConditionalGeneration (BART model) BlenderbotConfig configuration class: FlaxBlenderbotForConditionalGeneration (Blenderbot model) BlenderbotSmallConfig configuration class: FlaxBlenderbotSmallForConditionalGeneration (BlenderbotSmall model) EncoderDecoderConfig configuration class: FlaxEncoderDecoderModel (Encoder decoder model) LongT5Config configuration class: FlaxLongT5ForConditionalGeneration (LongT5 model) MBartConfig configuration class: FlaxMBartForConditionalGeneration (mBART model) MT5Config configuration class: FlaxMT5ForConditionalGeneration (MT5 model) MarianConfig configuration class: FlaxMarianMTModel (Marian model) PegasusConfig configuration class: FlaxPegasusForConditionalGeneration (Pegasus model) T5Config configuration class: FlaxT5ForConditionalGeneration (T5 model) Instantiates one of the model classes of the library (with a sequence-to-sequence language modeling head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: >>> from transformers import AutoConfig, FlaxAutoModelForSeq2SeqLM >>> >>> config = AutoConfig.from_pretrained("t5-base") >>> model = FlaxAutoModelForSeq2SeqLM.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. 
Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or URL to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model into a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it is loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a sequence-to-sequence language modeling head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: bart — FlaxBartForConditionalGeneration (BART model) blenderbot — FlaxBlenderbotForConditionalGeneration (Blenderbot model) blenderbot-small — FlaxBlenderbotSmallForConditionalGeneration (BlenderbotSmall model) encoder-decoder — FlaxEncoderDecoderModel (Encoder decoder model) longt5 — FlaxLongT5ForConditionalGeneration (LongT5 model) marian — FlaxMarianMTModel (Marian model) mbart — FlaxMBartForConditionalGeneration (mBART model) mt5 — FlaxMT5ForConditionalGeneration (MT5 model) pegasus — FlaxPegasusForConditionalGeneration (Pegasus model) t5 — FlaxT5ForConditionalGeneration (T5 model) Examples: >>> from transformers import AutoConfig, FlaxAutoModelForSeq2SeqLM >>> >>> model = FlaxAutoModelForSeq2SeqLM.from_pretrained("t5-base") >>> >>> model = FlaxAutoModelForSeq2SeqLM.from_pretrained("t5-base", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./pt_model/t5_pt_model_config.json") >>> model = FlaxAutoModelForSeq2SeqLM.from_pretrained( ... "./pt_model/t5_pytorch_model.bin", from_pt=True, config=config ... ) AutoModelForSequenceClassification class transformers.AutoModelForSequenceClassification < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence classification head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error).
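For orientation, a minimal sketch of the usual inference loop with AutoModelForSequenceClassification and AutoTokenizer follows. The distilbert-base-uncased-finetuned-sst-2-english checkpoint and the example sentence are assumptions for illustration only, not part of this API reference.
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification
>>> # Assumed checkpoint for illustration: a DistilBERT model fine-tuned on SST-2 sentiment.
>>> checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
>>> # Tokenize a sentence and run a forward pass; logits have shape (batch_size, num_labels).
>>> inputs = tokenizer("This movie was great!", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_class_id = int(logits.argmax(dim=-1))
>>> print(model.config.id2label[predicted_class_id])
The id2label mapping on the loaded configuration turns the predicted index into a human-readable label; which labels exist depends entirely on the checkpoint.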
from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: AlbertForSequenceClassification (ALBERT model) BartConfig configuration class: BartForSequenceClassification (BART model) BertConfig configuration class: BertForSequenceClassification (BERT model) BigBirdConfig configuration class: BigBirdForSequenceClassification (BigBird model) BigBirdPegasusConfig configuration class: BigBirdPegasusForSequenceClassification (BigBird-Pegasus model) BioGptConfig configuration class: BioGptForSequenceClassification (BioGpt model) BloomConfig configuration class: BloomForSequenceClassification (BLOOM model) CTRLConfig configuration class: CTRLForSequenceClassification (CTRL model) CamembertConfig configuration class: CamembertForSequenceClassification (CamemBERT model) CanineConfig configuration class: CanineForSequenceClassification (CANINE model) ConvBertConfig configuration class: ConvBertForSequenceClassification (ConvBERT model) Data2VecTextConfig configuration class: Data2VecTextForSequenceClassification (Data2VecText model) DebertaConfig configuration class: DebertaForSequenceClassification (DeBERTa model) DebertaV2Config configuration class: DebertaV2ForSequenceClassification (DeBERTa-v2 model) DistilBertConfig configuration class: DistilBertForSequenceClassification (DistilBERT model) ElectraConfig configuration class: ElectraForSequenceClassification (ELECTRA model) ErnieConfig configuration class: ErnieForSequenceClassification (ERNIE model) ErnieMConfig configuration class: ErnieMForSequenceClassification (ErnieM model) EsmConfig configuration class: EsmForSequenceClassification (ESM model) FNetConfig configuration class: FNetForSequenceClassification (FNet model) FalconConfig configuration class: FalconForSequenceClassification (Falcon model) FlaubertConfig configuration class: FlaubertForSequenceClassification (FlauBERT model) FunnelConfig configuration class: FunnelForSequenceClassification (Funnel Transformer model) GPT2Config configuration class: GPT2ForSequenceClassification (OpenAI GPT-2 model) GPTBigCodeConfig configuration class: GPTBigCodeForSequenceClassification (GPTBigCode model) GPTJConfig configuration class: GPTJForSequenceClassification (GPT-J model) GPTNeoConfig configuration class: GPTNeoForSequenceClassification (GPT Neo model) GPTNeoXConfig configuration class: GPTNeoXForSequenceClassification (GPT NeoX model) IBertConfig configuration class: IBertForSequenceClassification (I-BERT model) LEDConfig configuration class: LEDForSequenceClassification (LED model) LayoutLMConfig configuration class: LayoutLMForSequenceClassification (LayoutLM model) LayoutLMv2Config configuration class: LayoutLMv2ForSequenceClassification (LayoutLMv2 model) LayoutLMv3Config configuration class: LayoutLMv3ForSequenceClassification (LayoutLMv3 model) LiltConfig configuration class: LiltForSequenceClassification (LiLT model) LlamaConfig configuration class: LlamaForSequenceClassification (LLaMA model) LongformerConfig configuration class: LongformerForSequenceClassification (Longformer model) LukeConfig configuration class: LukeForSequenceClassification (LUKE model) MBartConfig configuration class: MBartForSequenceClassification (mBART model) MPNetConfig configuration class: MPNetForSequenceClassification (MPNet model) MT5Config configuration class: MT5ForSequenceClassification (MT5 model) MarkupLMConfig configuration class: MarkupLMForSequenceClassification (MarkupLM 
model) MegaConfig configuration class: MegaForSequenceClassification (MEGA model) MegatronBertConfig configuration class: MegatronBertForSequenceClassification (Megatron-BERT model) MistralConfig configuration class: MistralForSequenceClassification (Mistral model) MobileBertConfig configuration class: MobileBertForSequenceClassification (MobileBERT model) MptConfig configuration class: MptForSequenceClassification (MPT model) MraConfig configuration class: MraForSequenceClassification (MRA model) MvpConfig configuration class: MvpForSequenceClassification (MVP model) NezhaConfig configuration class: NezhaForSequenceClassification (Nezha model) NystromformerConfig configuration class: NystromformerForSequenceClassification (Nyströmformer model) OPTConfig configuration class: OPTForSequenceClassification (OPT model) OpenAIGPTConfig configuration class: OpenAIGPTForSequenceClassification (OpenAI GPT model) OpenLlamaConfig configuration class: OpenLlamaForSequenceClassification (OpenLlama model) PLBartConfig configuration class: PLBartForSequenceClassification (PLBart model) PerceiverConfig configuration class: PerceiverForSequenceClassification (Perceiver model) PersimmonConfig configuration class: PersimmonForSequenceClassification (Persimmon model) QDQBertConfig configuration class: QDQBertForSequenceClassification (QDQBert model) ReformerConfig configuration class: ReformerForSequenceClassification (Reformer model) RemBertConfig configuration class: RemBertForSequenceClassification (RemBERT model) RoCBertConfig configuration class: RoCBertForSequenceClassification (RoCBert model) RoFormerConfig configuration class: RoFormerForSequenceClassification (RoFormer model) RobertaConfig configuration class: RobertaForSequenceClassification (RoBERTa model) RobertaPreLayerNormConfig configuration class: RobertaPreLayerNormForSequenceClassification (RoBERTa-PreLayerNorm model) SqueezeBertConfig configuration class: SqueezeBertForSequenceClassification (SqueezeBERT model) T5Config configuration class: T5ForSequenceClassification (T5 model) TapasConfig configuration class: TapasForSequenceClassification (TAPAS model) TransfoXLConfig configuration class: TransfoXLForSequenceClassification (Transformer-XL model) UMT5Config configuration class: UMT5ForSequenceClassification (UMT5 model) XLMConfig configuration class: XLMForSequenceClassification (XLM model) XLMRobertaConfig configuration class: XLMRobertaForSequenceClassification (XLM-RoBERTa model) XLMRobertaXLConfig configuration class: XLMRobertaXLForSequenceClassification (XLM-RoBERTa-XL model) XLNetConfig configuration class: XLNetForSequenceClassification (XLNet model) XmodConfig configuration class: XmodForSequenceClassification (X-MOD model) YosoConfig configuration class: YosoForSequenceClassification (YOSO model) Instantiates one of the model classes of the library (with a sequence classification head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: >>> from transformers import AutoConfig, AutoModelForSequenceClassification >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = AutoModelForSequenceClassification.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. 
Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or URL to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it is loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a sequence classification head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: albert — AlbertForSequenceClassification (ALBERT model) bart — BartForSequenceClassification (BART model) bert — BertForSequenceClassification (BERT model) big_bird — BigBirdForSequenceClassification (BigBird model) bigbird_pegasus — BigBirdPegasusForSequenceClassification (BigBird-Pegasus model) biogpt — BioGptForSequenceClassification (BioGpt model) bloom — BloomForSequenceClassification (BLOOM model) camembert — CamembertForSequenceClassification (CamemBERT model) canine — CanineForSequenceClassification (CANINE model) code_llama — LlamaForSequenceClassification (CodeLlama model) convbert — ConvBertForSequenceClassification (ConvBERT model) ctrl — CTRLForSequenceClassification (CTRL model) data2vec-text — Data2VecTextForSequenceClassification (Data2VecText model) deberta — DebertaForSequenceClassification (DeBERTa model) deberta-v2 — DebertaV2ForSequenceClassification (DeBERTa-v2 model) distilbert — DistilBertForSequenceClassification (DistilBERT model) electra — ElectraForSequenceClassification (ELECTRA model) ernie — ErnieForSequenceClassification (ERNIE model) ernie_m — ErnieMForSequenceClassification (ErnieM model) esm — EsmForSequenceClassification (ESM model) falcon — FalconForSequenceClassification (Falcon model) flaubert — FlaubertForSequenceClassification (FlauBERT model) fnet — FNetForSequenceClassification (FNet model) funnel — FunnelForSequenceClassification (Funnel Transformer model) gpt-sw3 — GPT2ForSequenceClassification (GPT-Sw3 model) gpt2 — GPT2ForSequenceClassification (OpenAI GPT-2 model) gpt_bigcode — GPTBigCodeForSequenceClassification (GPTBigCode model) gpt_neo — GPTNeoForSequenceClassification (GPT Neo model) gpt_neox — GPTNeoXForSequenceClassification (GPT NeoX model) gptj — GPTJForSequenceClassification (GPT-J model) ibert — IBertForSequenceClassification (I-BERT model) layoutlm — LayoutLMForSequenceClassification (LayoutLM model) layoutlmv2 —
LayoutLMv2ForSequenceClassification (LayoutLMv2 model) layoutlmv3 — LayoutLMv3ForSequenceClassification (LayoutLMv3 model) led — LEDForSequenceClassification (LED model) lilt — LiltForSequenceClassification (LiLT model) llama — LlamaForSequenceClassification (LLaMA model) longformer — LongformerForSequenceClassification (Longformer model) luke — LukeForSequenceClassification (LUKE model) markuplm — MarkupLMForSequenceClassification (MarkupLM model) mbart — MBartForSequenceClassification (mBART model) mega — MegaForSequenceClassification (MEGA model) megatron-bert — MegatronBertForSequenceClassification (Megatron-BERT model) mistral — MistralForSequenceClassification (Mistral model) mobilebert — MobileBertForSequenceClassification (MobileBERT model) mpnet — MPNetForSequenceClassification (MPNet model) mpt — MptForSequenceClassification (MPT model) mra — MraForSequenceClassification (MRA model) mt5 — MT5ForSequenceClassification (MT5 model) mvp — MvpForSequenceClassification (MVP model) nezha — NezhaForSequenceClassification (Nezha model) nystromformer — NystromformerForSequenceClassification (Nyströmformer model) open-llama — OpenLlamaForSequenceClassification (OpenLlama model) openai-gpt — OpenAIGPTForSequenceClassification (OpenAI GPT model) opt — OPTForSequenceClassification (OPT model) perceiver — PerceiverForSequenceClassification (Perceiver model) persimmon — PersimmonForSequenceClassification (Persimmon model) plbart — PLBartForSequenceClassification (PLBart model) qdqbert — QDQBertForSequenceClassification (QDQBert model) reformer — ReformerForSequenceClassification (Reformer model) rembert — RemBertForSequenceClassification (RemBERT model) roberta — RobertaForSequenceClassification (RoBERTa model) roberta-prelayernorm — RobertaPreLayerNormForSequenceClassification (RoBERTa-PreLayerNorm model) roc_bert — RoCBertForSequenceClassification (RoCBert model) roformer — RoFormerForSequenceClassification (RoFormer model) squeezebert — SqueezeBertForSequenceClassification (SqueezeBERT model) t5 — T5ForSequenceClassification (T5 model) tapas — TapasForSequenceClassification (TAPAS model) transfo-xl — TransfoXLForSequenceClassification (Transformer-XL model) umt5 — UMT5ForSequenceClassification (UMT5 model) xlm — XLMForSequenceClassification (XLM model) xlm-roberta — XLMRobertaForSequenceClassification (XLM-RoBERTa model) xlm-roberta-xl — XLMRobertaXLForSequenceClassification (XLM-RoBERTa-XL model) xlnet — XLNetForSequenceClassification (XLNet model) xmod — XmodForSequenceClassification (X-MOD model) yoso — YosoForSequenceClassification (YOSO model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: >>> from transformers import AutoConfig, AutoModelForSequenceClassification >>> >>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased") >>> >>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json") >>> model = AutoModelForSequenceClassification.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config ... 
) TFAutoModelForSequenceClassification class transformers.TFAutoModelForSequenceClassification < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence classification head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: TFAlbertForSequenceClassification (ALBERT model) BartConfig configuration class: TFBartForSequenceClassification (BART model) BertConfig configuration class: TFBertForSequenceClassification (BERT model) CTRLConfig configuration class: TFCTRLForSequenceClassification (CTRL model) CamembertConfig configuration class: TFCamembertForSequenceClassification (CamemBERT model) ConvBertConfig configuration class: TFConvBertForSequenceClassification (ConvBERT model) DebertaConfig configuration class: TFDebertaForSequenceClassification (DeBERTa model) DebertaV2Config configuration class: TFDebertaV2ForSequenceClassification (DeBERTa-v2 model) DistilBertConfig configuration class: TFDistilBertForSequenceClassification (DistilBERT model) ElectraConfig configuration class: TFElectraForSequenceClassification (ELECTRA model) EsmConfig configuration class: TFEsmForSequenceClassification (ESM model) FlaubertConfig configuration class: TFFlaubertForSequenceClassification (FlauBERT model) FunnelConfig configuration class: TFFunnelForSequenceClassification (Funnel Transformer model) GPT2Config configuration class: TFGPT2ForSequenceClassification (OpenAI GPT-2 model) GPTJConfig configuration class: TFGPTJForSequenceClassification (GPT-J model) LayoutLMConfig configuration class: TFLayoutLMForSequenceClassification (LayoutLM model) LayoutLMv3Config configuration class: TFLayoutLMv3ForSequenceClassification (LayoutLMv3 model) LongformerConfig configuration class: TFLongformerForSequenceClassification (Longformer model) MPNetConfig configuration class: TFMPNetForSequenceClassification (MPNet model) MobileBertConfig configuration class: TFMobileBertForSequenceClassification (MobileBERT model) OpenAIGPTConfig configuration class: TFOpenAIGPTForSequenceClassification (OpenAI GPT model) RemBertConfig configuration class: TFRemBertForSequenceClassification (RemBERT model) RoFormerConfig configuration class: TFRoFormerForSequenceClassification (RoFormer model) RobertaConfig configuration class: TFRobertaForSequenceClassification (RoBERTa model) RobertaPreLayerNormConfig configuration class: TFRobertaPreLayerNormForSequenceClassification (RoBERTa-PreLayerNorm model) TapasConfig configuration class: TFTapasForSequenceClassification (TAPAS model) TransfoXLConfig configuration class: TFTransfoXLForSequenceClassification (Transformer-XL model) XLMConfig configuration class: TFXLMForSequenceClassification (XLM model) XLMRobertaConfig configuration class: TFXLMRobertaForSequenceClassification (XLM-RoBERTa model) XLNetConfig configuration class: TFXLNetForSequenceClassification (XLNet model) Instantiates one of the model classes of the library (with a sequence classification head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. 
Examples: >>> from transformers import AutoConfig, TFAutoModelForSequenceClassification >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = TFAutoModelForSequenceClassification.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or URL to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model into a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a sequence classification head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: albert — TFAlbertForSequenceClassification (ALBERT model) bart — TFBartForSequenceClassification (BART model) bert — TFBertForSequenceClassification (BERT model) camembert — TFCamembertForSequenceClassification (CamemBERT model) convbert — TFConvBertForSequenceClassification (ConvBERT model) ctrl — TFCTRLForSequenceClassification (CTRL model) deberta — TFDebertaForSequenceClassification (DeBERTa model) deberta-v2 — TFDebertaV2ForSequenceClassification (DeBERTa-v2 model) distilbert — TFDistilBertForSequenceClassification (DistilBERT model) electra — TFElectraForSequenceClassification (ELECTRA model) esm — TFEsmForSequenceClassification (ESM model) flaubert — TFFlaubertForSequenceClassification (FlauBERT model) funnel — TFFunnelForSequenceClassification (Funnel Transformer model) gpt-sw3 — TFGPT2ForSequenceClassification (GPT-Sw3 model) gpt2 — TFGPT2ForSequenceClassification (OpenAI GPT-2 model) gptj — TFGPTJForSequenceClassification (GPT-J model) layoutlm — TFLayoutLMForSequenceClassification (LayoutLM model) layoutlmv3 — TFLayoutLMv3ForSequenceClassification (LayoutLMv3 model) longformer — TFLongformerForSequenceClassification (Longformer model) mobilebert — TFMobileBertForSequenceClassification (MobileBERT model) mpnet — TFMPNetForSequenceClassification (MPNet model) openai-gpt — TFOpenAIGPTForSequenceClassification (OpenAI GPT model) rembert — TFRemBertForSequenceClassification (RemBERT model) roberta — TFRobertaForSequenceClassification (RoBERTa model) roberta-prelayernorm — TFRobertaPreLayerNormForSequenceClassification (RoBERTa-PreLayerNorm model) roformer — TFRoFormerForSequenceClassification (RoFormer model) tapas — TFTapasForSequenceClassification (TAPAS model) transfo-xl — TFTransfoXLForSequenceClassification (Transformer-XL model) xlm — TFXLMForSequenceClassification (XLM model) xlm-roberta — TFXLMRobertaForSequenceClassification (XLM-RoBERTa model) xlnet — 
TFXLNetForSequenceClassification (XLNet model) Examples: >>> from transformers import AutoConfig, TFAutoModelForSequenceClassification >>> >>> model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased") >>> >>> model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json") >>> model = TFAutoModelForSequenceClassification.from_pretrained( ... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config ... ) FlaxAutoModelForSequenceClassification class transformers.FlaxAutoModelForSequenceClassification < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence classification head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: FlaxAlbertForSequenceClassification (ALBERT model) BartConfig configuration class: FlaxBartForSequenceClassification (BART model) BertConfig configuration class: FlaxBertForSequenceClassification (BERT model) BigBirdConfig configuration class: FlaxBigBirdForSequenceClassification (BigBird model) DistilBertConfig configuration class: FlaxDistilBertForSequenceClassification (DistilBERT model) ElectraConfig configuration class: FlaxElectraForSequenceClassification (ELECTRA model) MBartConfig configuration class: FlaxMBartForSequenceClassification (mBART model) RoFormerConfig configuration class: FlaxRoFormerForSequenceClassification (RoFormer model) RobertaConfig configuration class: FlaxRobertaForSequenceClassification (RoBERTa model) RobertaPreLayerNormConfig configuration class: FlaxRobertaPreLayerNormForSequenceClassification (RoBERTa-PreLayerNorm model) XLMRobertaConfig configuration class: FlaxXLMRobertaForSequenceClassification (XLM-RoBERTa model) Instantiates one of the model classes of the library (with a sequence classification head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: >>> from transformers import AutoConfig, FlaxAutoModelForSequenceClassification >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = FlaxAutoModelForSequenceClassification.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. 
model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. 
Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a sequence classification head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: albert — FlaxAlbertForSequenceClassification (ALBERT model) bart — FlaxBartForSequenceClassification (BART model) bert — FlaxBertForSequenceClassification (BERT model) big_bird — FlaxBigBirdForSequenceClassification (BigBird model) distilbert — FlaxDistilBertForSequenceClassification (DistilBERT model) electra — FlaxElectraForSequenceClassification (ELECTRA model) mbart — FlaxMBartForSequenceClassification (mBART model) roberta — FlaxRobertaForSequenceClassification (RoBERTa model) roberta-prelayernorm — FlaxRobertaPreLayerNormForSequenceClassification (RoBERTa-PreLayerNorm model) roformer — FlaxRoFormerForSequenceClassification (RoFormer model) xlm-roberta — FlaxXLMRobertaForSequenceClassification (XLM-RoBERTa model) Examples: >>> from transformers import AutoConfig, FlaxAutoModelForSequenceClassification >>> >>> model = FlaxAutoModelForSequenceClassification.from_pretrained("bert-base-cased") >>> >>> model = FlaxAutoModelForSequenceClassification.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json") >>> model = FlaxAutoModelForSequenceClassification.from_pretrained( ... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config ... ) AutoModelForMultipleChoice class transformers.AutoModelForMultipleChoice < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a multiple choice head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). 
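Before the factory methods are described in detail, here is a rough sketch of how a multiple choice model is typically used at inference time. It assumes PyTorch is installed and uses bert-base-cased purely for illustration; that checkpoint has no multiple choice head, so the head is randomly initialized and the predicted choice is arbitrary until the model has been fine-tuned. The key detail is the input shape: every tensor must be (batch_size, num_choices, sequence_length).

>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForMultipleChoice

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
>>> model = AutoModelForMultipleChoice.from_pretrained("bert-base-cased")

>>> prompt = "The capital of France is"
>>> choices = ["Paris.", "a programming language."]

>>> # Pair the prompt with every choice, then add a batch dimension: (batch_size, num_choices, seq_len).
>>> encoding = tokenizer([prompt] * len(choices), choices, return_tensors="pt", padding=True)
>>> inputs = {name: tensor.unsqueeze(0) for name, tensor in encoding.items()}
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_choice = logits.argmax(dim=-1).item()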
from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: AlbertForMultipleChoice (ALBERT model) BertConfig configuration class: BertForMultipleChoice (BERT model) BigBirdConfig configuration class: BigBirdForMultipleChoice (BigBird model) CamembertConfig configuration class: CamembertForMultipleChoice (CamemBERT model) CanineConfig configuration class: CanineForMultipleChoice (CANINE model) ConvBertConfig configuration class: ConvBertForMultipleChoice (ConvBERT model) Data2VecTextConfig configuration class: Data2VecTextForMultipleChoice (Data2VecText model) DebertaV2Config configuration class: DebertaV2ForMultipleChoice (DeBERTa-v2 model) DistilBertConfig configuration class: DistilBertForMultipleChoice (DistilBERT model) ElectraConfig configuration class: ElectraForMultipleChoice (ELECTRA model) ErnieConfig configuration class: ErnieForMultipleChoice (ERNIE model) ErnieMConfig configuration class: ErnieMForMultipleChoice (ErnieM model) FNetConfig configuration class: FNetForMultipleChoice (FNet model) FlaubertConfig configuration class: FlaubertForMultipleChoice (FlauBERT model) FunnelConfig configuration class: FunnelForMultipleChoice (Funnel Transformer model) IBertConfig configuration class: IBertForMultipleChoice (I-BERT model) LongformerConfig configuration class: LongformerForMultipleChoice (Longformer model) LukeConfig configuration class: LukeForMultipleChoice (LUKE model) MPNetConfig configuration class: MPNetForMultipleChoice (MPNet model) MegaConfig configuration class: MegaForMultipleChoice (MEGA model) MegatronBertConfig configuration class: MegatronBertForMultipleChoice (Megatron-BERT model) MobileBertConfig configuration class: MobileBertForMultipleChoice (MobileBERT model) MraConfig configuration class: MraForMultipleChoice (MRA model) NezhaConfig configuration class: NezhaForMultipleChoice (Nezha model) NystromformerConfig configuration class: NystromformerForMultipleChoice (Nyströmformer model) QDQBertConfig configuration class: QDQBertForMultipleChoice (QDQBert model) RemBertConfig configuration class: RemBertForMultipleChoice (RemBERT model) RoCBertConfig configuration class: RoCBertForMultipleChoice (RoCBert model) RoFormerConfig configuration class: RoFormerForMultipleChoice (RoFormer model) RobertaConfig configuration class: RobertaForMultipleChoice (RoBERTa model) RobertaPreLayerNormConfig configuration class: RobertaPreLayerNormForMultipleChoice (RoBERTa-PreLayerNorm model) SqueezeBertConfig configuration class: SqueezeBertForMultipleChoice (SqueezeBERT model) XLMConfig configuration class: XLMForMultipleChoice (XLM model) XLMRobertaConfig configuration class: XLMRobertaForMultipleChoice (XLM-RoBERTa model) XLMRobertaXLConfig configuration class: XLMRobertaXLForMultipleChoice (XLM-RoBERTa-XL model) XLNetConfig configuration class: XLNetForMultipleChoice (XLNet model) XmodConfig configuration class: XmodForMultipleChoice (X-MOD model) YosoConfig configuration class: YosoForMultipleChoice (YOSO model) Instantiates one of the model classes of the library (with a multiple choice head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. 
Examples: >>> from transformers import AutoConfig, AutoModelForMultipleChoice >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = AutoModelForMultipleChoice.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. 
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a multiple choice head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: albert — AlbertForMultipleChoice (ALBERT model) bert — BertForMultipleChoice (BERT model) big_bird — BigBirdForMultipleChoice (BigBird model) camembert — CamembertForMultipleChoice (CamemBERT model) canine — CanineForMultipleChoice (CANINE model) convbert — ConvBertForMultipleChoice (ConvBERT model) data2vec-text — Data2VecTextForMultipleChoice (Data2VecText model) deberta-v2 — DebertaV2ForMultipleChoice (DeBERTa-v2 model) distilbert — DistilBertForMultipleChoice (DistilBERT model) electra — ElectraForMultipleChoice (ELECTRA model) ernie — ErnieForMultipleChoice (ERNIE model) ernie_m — ErnieMForMultipleChoice (ErnieM model) flaubert — FlaubertForMultipleChoice (FlauBERT model) fnet — FNetForMultipleChoice (FNet model) funnel — FunnelForMultipleChoice (Funnel Transformer model) ibert — IBertForMultipleChoice (I-BERT model) longformer — LongformerForMultipleChoice (Longformer model) luke — LukeForMultipleChoice (LUKE model) mega — MegaForMultipleChoice (MEGA model) megatron-bert — MegatronBertForMultipleChoice (Megatron-BERT model) mobilebert — MobileBertForMultipleChoice (MobileBERT model) mpnet — MPNetForMultipleChoice (MPNet model) mra — MraForMultipleChoice (MRA model) nezha — NezhaForMultipleChoice (Nezha model) nystromformer — NystromformerForMultipleChoice (Nyströmformer model) qdqbert — QDQBertForMultipleChoice (QDQBert model) rembert — RemBertForMultipleChoice (RemBERT model) roberta — RobertaForMultipleChoice (RoBERTa model) roberta-prelayernorm — RobertaPreLayerNormForMultipleChoice (RoBERTa-PreLayerNorm model) roc_bert — RoCBertForMultipleChoice (RoCBert model) roformer — 
RoFormerForMultipleChoice (RoFormer model) squeezebert — SqueezeBertForMultipleChoice (SqueezeBERT model) xlm — XLMForMultipleChoice (XLM model) xlm-roberta — XLMRobertaForMultipleChoice (XLM-RoBERTa model) xlm-roberta-xl — XLMRobertaXLForMultipleChoice (XLM-RoBERTa-XL model) xlnet — XLNetForMultipleChoice (XLNet model) xmod — XmodForMultipleChoice (X-MOD model) yoso — YosoForMultipleChoice (YOSO model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: >>> from transformers import AutoConfig, AutoModelForMultipleChoice >>> >>> model = AutoModelForMultipleChoice.from_pretrained("bert-base-cased") >>> >>> model = AutoModelForMultipleChoice.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json") >>> model = AutoModelForMultipleChoice.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config ... ) TFAutoModelForMultipleChoice class transformers.TFAutoModelForMultipleChoice < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a multiple choice head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: TFAlbertForMultipleChoice (ALBERT model) BertConfig configuration class: TFBertForMultipleChoice (BERT model) CamembertConfig configuration class: TFCamembertForMultipleChoice (CamemBERT model) ConvBertConfig configuration class: TFConvBertForMultipleChoice (ConvBERT model) DebertaV2Config configuration class: TFDebertaV2ForMultipleChoice (DeBERTa-v2 model) DistilBertConfig configuration class: TFDistilBertForMultipleChoice (DistilBERT model) ElectraConfig configuration class: TFElectraForMultipleChoice (ELECTRA model) FlaubertConfig configuration class: TFFlaubertForMultipleChoice (FlauBERT model) FunnelConfig configuration class: TFFunnelForMultipleChoice (Funnel Transformer model) LongformerConfig configuration class: TFLongformerForMultipleChoice (Longformer model) MPNetConfig configuration class: TFMPNetForMultipleChoice (MPNet model) MobileBertConfig configuration class: TFMobileBertForMultipleChoice (MobileBERT model) RemBertConfig configuration class: TFRemBertForMultipleChoice (RemBERT model) RoFormerConfig configuration class: TFRoFormerForMultipleChoice (RoFormer model) RobertaConfig configuration class: TFRobertaForMultipleChoice (RoBERTa model) RobertaPreLayerNormConfig configuration class: TFRobertaPreLayerNormForMultipleChoice (RoBERTa-PreLayerNorm model) XLMConfig configuration class: TFXLMForMultipleChoice (XLM model) XLMRobertaConfig configuration class: TFXLMRobertaForMultipleChoice (XLM-RoBERTa model) XLNetConfig configuration class: TFXLNetForMultipleChoice (XLNet model) Instantiates one of the model classes of the library (with a multiple choice head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. 
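Because from_config never loads weights, a common pattern is to start from from_pretrained(), fine-tune, and then persist the result with save_pretrained() so it can be reloaded later by directory path. A rough sketch (the directory name below is made up for illustration):

>>> from transformers import TFAutoModelForMultipleChoice

>>> model = TFAutoModelForMultipleChoice.from_pretrained("bert-base-cased")
>>> # ... fine-tune the model here ...

>>> # Persist the weights and config, then reload them from the local directory.
>>> model.save_pretrained("./my_multiple_choice_model")
>>> reloaded = TFAutoModelForMultipleChoice.from_pretrained("./my_multiple_choice_model")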
Examples: >>> from transformers import AutoConfig, TFAutoModelForMultipleChoice >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = TFAutoModelForMultipleChoice.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. 
code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a multiple choice head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: albert — TFAlbertForMultipleChoice (ALBERT model) bert — TFBertForMultipleChoice (BERT model) camembert — TFCamembertForMultipleChoice (CamemBERT model) convbert — TFConvBertForMultipleChoice (ConvBERT model) deberta-v2 — TFDebertaV2ForMultipleChoice (DeBERTa-v2 model) distilbert — TFDistilBertForMultipleChoice (DistilBERT model) electra — TFElectraForMultipleChoice (ELECTRA model) flaubert — TFFlaubertForMultipleChoice (FlauBERT model) funnel — TFFunnelForMultipleChoice (Funnel Transformer model) longformer — TFLongformerForMultipleChoice (Longformer model) mobilebert — TFMobileBertForMultipleChoice (MobileBERT model) mpnet — TFMPNetForMultipleChoice (MPNet model) rembert — TFRemBertForMultipleChoice (RemBERT model) roberta — TFRobertaForMultipleChoice (RoBERTa model) roberta-prelayernorm — TFRobertaPreLayerNormForMultipleChoice (RoBERTa-PreLayerNorm model) roformer — TFRoFormerForMultipleChoice (RoFormer model) xlm — TFXLMForMultipleChoice (XLM model) xlm-roberta — TFXLMRobertaForMultipleChoice (XLM-RoBERTa model) xlnet — TFXLNetForMultipleChoice (XLNet model) Examples: >>> from transformers import AutoConfig, TFAutoModelForMultipleChoice >>> >>> model = TFAutoModelForMultipleChoice.from_pretrained("bert-base-cased") >>> >>> model = TFAutoModelForMultipleChoice.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json") >>> model = TFAutoModelForMultipleChoice.from_pretrained( ... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config ... ) FlaxAutoModelForMultipleChoice class transformers.FlaxAutoModelForMultipleChoice < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a multiple choice head) when created with the from_pretrained() class method or the from_config() class method. 
This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: FlaxAlbertForMultipleChoice (ALBERT model) BertConfig configuration class: FlaxBertForMultipleChoice (BERT model) BigBirdConfig configuration class: FlaxBigBirdForMultipleChoice (BigBird model) DistilBertConfig configuration class: FlaxDistilBertForMultipleChoice (DistilBERT model) ElectraConfig configuration class: FlaxElectraForMultipleChoice (ELECTRA model) RoFormerConfig configuration class: FlaxRoFormerForMultipleChoice (RoFormer model) RobertaConfig configuration class: FlaxRobertaForMultipleChoice (RoBERTa model) RobertaPreLayerNormConfig configuration class: FlaxRobertaPreLayerNormForMultipleChoice (RoBERTa-PreLayerNorm model) XLMRobertaConfig configuration class: FlaxXLMRobertaForMultipleChoice (XLM-RoBERTa model) Instantiates one of the model classes of the library (with a multiple choice head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: >>> from transformers import AutoConfig, FlaxAutoModelForMultipleChoice >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = FlaxAutoModelForMultipleChoice.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. 
resume_download (bool, optional, defaults to False) — Whether or not to resume downloading incompletely received files rather than deleting them and starting over. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a multiple choice head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: albert — FlaxAlbertForMultipleChoice (ALBERT model) bert — FlaxBertForMultipleChoice (BERT model) big_bird — FlaxBigBirdForMultipleChoice (BigBird model) distilbert — FlaxDistilBertForMultipleChoice (DistilBERT model) electra — FlaxElectraForMultipleChoice (ELECTRA model) roberta — FlaxRobertaForMultipleChoice (RoBERTa model) roberta-prelayernorm — FlaxRobertaPreLayerNormForMultipleChoice (RoBERTa-PreLayerNorm model) roformer — FlaxRoFormerForMultipleChoice (RoFormer model) xlm-roberta — FlaxXLMRobertaForMultipleChoice (XLM-RoBERTa model) Examples: >>> from transformers import AutoConfig, FlaxAutoModelForMultipleChoice >>> >>> model = FlaxAutoModelForMultipleChoice.from_pretrained("bert-base-cased") >>> >>> model = FlaxAutoModelForMultipleChoice.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json") >>> model = FlaxAutoModelForMultipleChoice.from_pretrained( ... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config ... ) AutoModelForNextSentencePrediction class transformers.AutoModelForNextSentencePrediction < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a next sentence prediction head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: BertConfig configuration class: BertForNextSentencePrediction (BERT model) ErnieConfig configuration class: ErnieForNextSentencePrediction (ERNIE model) FNetConfig configuration class: FNetForNextSentencePrediction (FNet model) MegatronBertConfig configuration class: MegatronBertForNextSentencePrediction (Megatron-BERT model) MobileBertConfig configuration class: MobileBertForNextSentencePrediction (MobileBERT model) NezhaConfig configuration class: NezhaForNextSentencePrediction (Nezha model) QDQBertConfig configuration class: QDQBertForNextSentencePrediction (QDQBert model) Instantiates one of the model classes of the library (with a next sentence prediction head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: >>> from transformers import AutoConfig, AutoModelForNextSentencePrediction >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = AutoModelForNextSentencePrediction.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. 
A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to resume downloading incompletely received files rather than deleting them and starting over. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a next sentence prediction head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: bert — BertForNextSentencePrediction (BERT model) ernie — ErnieForNextSentencePrediction (ERNIE model) fnet — FNetForNextSentencePrediction (FNet model) megatron-bert — MegatronBertForNextSentencePrediction (Megatron-BERT model) mobilebert — MobileBertForNextSentencePrediction (MobileBERT model) nezha — NezhaForNextSentencePrediction (Nezha model) qdqbert — QDQBertForNextSentencePrediction (QDQBert model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: >>> from transformers import AutoConfig, AutoModelForNextSentencePrediction >>> >>> model = AutoModelForNextSentencePrediction.from_pretrained("bert-base-cased") >>> >>> model = AutoModelForNextSentencePrediction.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json") >>> model = AutoModelForNextSentencePrediction.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config ... ) TFAutoModelForNextSentencePrediction class transformers.TFAutoModelForNextSentencePrediction < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a next sentence prediction head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: BertConfig configuration class: TFBertForNextSentencePrediction (BERT model) MobileBertConfig configuration class: TFMobileBertForNextSentencePrediction (MobileBERT model) Instantiates one of the model classes of the library (with a next sentence prediction head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. 
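For context, next sentence prediction scores whether one sentence plausibly follows another. Below is a rough inference sketch with TFAutoModelForNextSentencePrediction (it assumes TensorFlow is installed and that the bert-base-cased checkpoint can be downloaded; BERT was pretrained with this objective, so the head ships with the checkpoint, and the two sentences are made up for illustration):

>>> import tensorflow as tf
>>> from transformers import AutoTokenizer, TFAutoModelForNextSentencePrediction

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
>>> model = TFAutoModelForNextSentencePrediction.from_pretrained("bert-base-cased")

>>> # Encode the sentence pair; by BERT's convention, index 0 scores "the second sentence follows the first" and index 1 scores "random sentence".
>>> inputs = tokenizer("The sky was clear all afternoon.", "It was a perfect day for a walk.", return_tensors="tf")
>>> logits = model(**inputs).logits
>>> is_next = int(tf.argmax(logits, axis=-1)[0]) == 0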
Examples: >>> from transformers import AutoConfig, TFAutoModelForNextSentencePrediction >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = TFAutoModelForNextSentencePrediction.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. 
code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a next sentence prediction head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: bert — TFBertForNextSentencePrediction (BERT model) mobilebert — TFMobileBertForNextSentencePrediction (MobileBERT model) Examples: >>> from transformers import AutoConfig, TFAutoModelForNextSentencePrediction >>> >>> model = TFAutoModelForNextSentencePrediction.from_pretrained("bert-base-cased") >>> >>> model = TFAutoModelForNextSentencePrediction.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json") >>> model = TFAutoModelForNextSentencePrediction.from_pretrained( ... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config ... ) FlaxAutoModelForNextSentencePrediction class transformers.FlaxAutoModelForNextSentencePrediction < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a next sentence prediction head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: BertConfig configuration class: FlaxBertForNextSentencePrediction (BERT model) Instantiates one of the model classes of the library (with a next sentence prediction head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. 
Examples: >>> from transformers import AutoConfig, FlaxAutoModelForNextSentencePrediction >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = FlaxAutoModelForNextSentencePrediction.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. 
code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a next sentence prediction head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: bert — FlaxBertForNextSentencePrediction (BERT model) Examples: >>> from transformers import AutoConfig, FlaxAutoModelForNextSentencePrediction >>> >>> model = FlaxAutoModelForNextSentencePrediction.from_pretrained("bert-base-cased") >>> >>> model = FlaxAutoModelForNextSentencePrediction.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json") >>> model = FlaxAutoModelForNextSentencePrediction.from_pretrained( ... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config ... ) AutoModelForTokenClassification class transformers.AutoModelForTokenClassification < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a token classification head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). 
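Beyond the loading examples further below, a typical use of this class is a short inference pass over the per-token logits. The following is a minimal sketch, assuming PyTorch is installed; the checkpoint name is only illustrative (any public token-classification checkpoint works) and is not part of this API reference:
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForTokenClassification

>>> # Illustrative NER checkpoint; substitute any token-classification model.
>>> checkpoint = "dslim/bert-base-NER"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> model = AutoModelForTokenClassification.from_pretrained(checkpoint)

>>> inputs = tokenizer("Hugging Face is based in New York City", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_ids = logits.argmax(dim=-1)[0]
>>> labels = [model.config.id2label[int(i)] for i in predicted_ids]
The label names come from the checkpoint's id2label mapping, so no extra metadata is needed to interpret the predictions.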
from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: AlbertForTokenClassification (ALBERT model) BertConfig configuration class: BertForTokenClassification (BERT model) BigBirdConfig configuration class: BigBirdForTokenClassification (BigBird model) BioGptConfig configuration class: BioGptForTokenClassification (BioGpt model) BloomConfig configuration class: BloomForTokenClassification (BLOOM model) BrosConfig configuration class: BrosForTokenClassification (BROS model) CamembertConfig configuration class: CamembertForTokenClassification (CamemBERT model) CanineConfig configuration class: CanineForTokenClassification (CANINE model) ConvBertConfig configuration class: ConvBertForTokenClassification (ConvBERT model) Data2VecTextConfig configuration class: Data2VecTextForTokenClassification (Data2VecText model) DebertaConfig configuration class: DebertaForTokenClassification (DeBERTa model) DebertaV2Config configuration class: DebertaV2ForTokenClassification (DeBERTa-v2 model) DistilBertConfig configuration class: DistilBertForTokenClassification (DistilBERT model) ElectraConfig configuration class: ElectraForTokenClassification (ELECTRA model) ErnieConfig configuration class: ErnieForTokenClassification (ERNIE model) ErnieMConfig configuration class: ErnieMForTokenClassification (ErnieM model) EsmConfig configuration class: EsmForTokenClassification (ESM model) FNetConfig configuration class: FNetForTokenClassification (FNet model) FalconConfig configuration class: FalconForTokenClassification (Falcon model) FlaubertConfig configuration class: FlaubertForTokenClassification (FlauBERT model) FunnelConfig configuration class: FunnelForTokenClassification (Funnel Transformer model) GPT2Config configuration class: GPT2ForTokenClassification (OpenAI GPT-2 model) GPTBigCodeConfig configuration class: GPTBigCodeForTokenClassification (GPTBigCode model) GPTNeoConfig configuration class: GPTNeoForTokenClassification (GPT Neo model) GPTNeoXConfig configuration class: GPTNeoXForTokenClassification (GPT NeoX model) IBertConfig configuration class: IBertForTokenClassification (I-BERT model) LayoutLMConfig configuration class: LayoutLMForTokenClassification (LayoutLM model) LayoutLMv2Config configuration class: LayoutLMv2ForTokenClassification (LayoutLMv2 model) LayoutLMv3Config configuration class: LayoutLMv3ForTokenClassification (LayoutLMv3 model) LiltConfig configuration class: LiltForTokenClassification (LiLT model) LongformerConfig configuration class: LongformerForTokenClassification (Longformer model) LukeConfig configuration class: LukeForTokenClassification (LUKE model) MPNetConfig configuration class: MPNetForTokenClassification (MPNet model) MarkupLMConfig configuration class: MarkupLMForTokenClassification (MarkupLM model) MegaConfig configuration class: MegaForTokenClassification (MEGA model) MegatronBertConfig configuration class: MegatronBertForTokenClassification (Megatron-BERT model) MobileBertConfig configuration class: MobileBertForTokenClassification (MobileBERT model) MptConfig configuration class: MptForTokenClassification (MPT model) MraConfig configuration class: MraForTokenClassification (MRA model) NezhaConfig configuration class: NezhaForTokenClassification (Nezha model) NystromformerConfig configuration class: NystromformerForTokenClassification (Nyströmformer model) QDQBertConfig configuration class: QDQBertForTokenClassification (QDQBert model) 
RemBertConfig configuration class: RemBertForTokenClassification (RemBERT model) RoCBertConfig configuration class: RoCBertForTokenClassification (RoCBert model) RoFormerConfig configuration class: RoFormerForTokenClassification (RoFormer model) RobertaConfig configuration class: RobertaForTokenClassification (RoBERTa model) RobertaPreLayerNormConfig configuration class: RobertaPreLayerNormForTokenClassification (RoBERTa-PreLayerNorm model) SqueezeBertConfig configuration class: SqueezeBertForTokenClassification (SqueezeBERT model) XLMConfig configuration class: XLMForTokenClassification (XLM model) XLMRobertaConfig configuration class: XLMRobertaForTokenClassification (XLM-RoBERTa model) XLMRobertaXLConfig configuration class: XLMRobertaXLForTokenClassification (XLM-RoBERTa-XL model) XLNetConfig configuration class: XLNetForTokenClassification (XLNet model) XmodConfig configuration class: XmodForTokenClassification (X-MOD model) YosoConfig configuration class: YosoForTokenClassification (YOSO model) Instantiates one of the model classes of the library (with a token classification head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: >>> from transformers import AutoConfig, AutoModelForTokenClassification >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = AutoModelForTokenClassification.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. 
from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Instantiate one of the model classes of the library (with a token classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: albert — AlbertForTokenClassification (ALBERT model) bert — BertForTokenClassification (BERT model) big_bird — BigBirdForTokenClassification (BigBird model) biogpt — BioGptForTokenClassification (BioGpt model) bloom — BloomForTokenClassification (BLOOM model) bros — BrosForTokenClassification (BROS model) camembert — CamembertForTokenClassification (CamemBERT model) canine — CanineForTokenClassification (CANINE model) convbert — ConvBertForTokenClassification (ConvBERT model) data2vec-text — Data2VecTextForTokenClassification (Data2VecText model) deberta — DebertaForTokenClassification (DeBERTa model) deberta-v2 — DebertaV2ForTokenClassification (DeBERTa-v2 model) distilbert — DistilBertForTokenClassification (DistilBERT model) electra — ElectraForTokenClassification (ELECTRA model) ernie — ErnieForTokenClassification (ERNIE model) ernie_m — ErnieMForTokenClassification (ErnieM model) esm — EsmForTokenClassification (ESM model) falcon — FalconForTokenClassification (Falcon model) flaubert — FlaubertForTokenClassification (FlauBERT model) fnet — FNetForTokenClassification (FNet model) funnel — FunnelForTokenClassification (Funnel Transformer model) gpt-sw3 — GPT2ForTokenClassification (GPT-Sw3 model) gpt2 — GPT2ForTokenClassification (OpenAI GPT-2 model) gpt_bigcode — GPTBigCodeForTokenClassification (GPTBigCode model) gpt_neo — GPTNeoForTokenClassification (GPT Neo model) gpt_neox — GPTNeoXForTokenClassification (GPT NeoX model) ibert — IBertForTokenClassification (I-BERT model) layoutlm — LayoutLMForTokenClassification (LayoutLM model) layoutlmv2 — LayoutLMv2ForTokenClassification (LayoutLMv2 model) layoutlmv3 — LayoutLMv3ForTokenClassification (LayoutLMv3 model) lilt — LiltForTokenClassification (LiLT model) longformer — LongformerForTokenClassification (Longformer model) luke — LukeForTokenClassification (LUKE model) markuplm — MarkupLMForTokenClassification (MarkupLM model) mega — MegaForTokenClassification (MEGA model) megatron-bert — MegatronBertForTokenClassification (Megatron-BERT model) mobilebert — MobileBertForTokenClassification (MobileBERT model) mpnet — MPNetForTokenClassification (MPNet model) mpt — MptForTokenClassification (MPT model) mra — MraForTokenClassification (MRA model) nezha — NezhaForTokenClassification (Nezha model) nystromformer — NystromformerForTokenClassification (Nyströmformer model) qdqbert — QDQBertForTokenClassification (QDQBert model) rembert — RemBertForTokenClassification (RemBERT model) roberta — RobertaForTokenClassification (RoBERTa model) roberta-prelayernorm — RobertaPreLayerNormForTokenClassification (RoBERTa-PreLayerNorm model) roc_bert — RoCBertForTokenClassification (RoCBert model) roformer — RoFormerForTokenClassification (RoFormer model) squeezebert — SqueezeBertForTokenClassification (SqueezeBERT model) xlm — XLMForTokenClassification (XLM model) xlm-roberta — XLMRobertaForTokenClassification (XLM-RoBERTa model) xlm-roberta-xl — XLMRobertaXLForTokenClassification (XLM-RoBERTa-XL model) xlnet — XLNetForTokenClassification (XLNet model) xmod — XmodForTokenClassification (X-MOD model) yoso — YosoForTokenClassification (YOSO model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). 
To train the model, you should first set it back in training mode with model.train() Examples: >>> from transformers import AutoConfig, AutoModelForTokenClassification >>> >>> model = AutoModelForTokenClassification.from_pretrained("bert-base-cased") >>> >>> model = AutoModelForTokenClassification.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json") >>> model = AutoModelForTokenClassification.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config ... ) TFAutoModelForTokenClassification class transformers.TFAutoModelForTokenClassification < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a token classification head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: TFAlbertForTokenClassification (ALBERT model) BertConfig configuration class: TFBertForTokenClassification (BERT model) CamembertConfig configuration class: TFCamembertForTokenClassification (CamemBERT model) ConvBertConfig configuration class: TFConvBertForTokenClassification (ConvBERT model) DebertaConfig configuration class: TFDebertaForTokenClassification (DeBERTa model) DebertaV2Config configuration class: TFDebertaV2ForTokenClassification (DeBERTa-v2 model) DistilBertConfig configuration class: TFDistilBertForTokenClassification (DistilBERT model) ElectraConfig configuration class: TFElectraForTokenClassification (ELECTRA model) EsmConfig configuration class: TFEsmForTokenClassification (ESM model) FlaubertConfig configuration class: TFFlaubertForTokenClassification (FlauBERT model) FunnelConfig configuration class: TFFunnelForTokenClassification (Funnel Transformer model) LayoutLMConfig configuration class: TFLayoutLMForTokenClassification (LayoutLM model) LayoutLMv3Config configuration class: TFLayoutLMv3ForTokenClassification (LayoutLMv3 model) LongformerConfig configuration class: TFLongformerForTokenClassification (Longformer model) MPNetConfig configuration class: TFMPNetForTokenClassification (MPNet model) MobileBertConfig configuration class: TFMobileBertForTokenClassification (MobileBERT model) RemBertConfig configuration class: TFRemBertForTokenClassification (RemBERT model) RoFormerConfig configuration class: TFRoFormerForTokenClassification (RoFormer model) RobertaConfig configuration class: TFRobertaForTokenClassification (RoBERTa model) RobertaPreLayerNormConfig configuration class: TFRobertaPreLayerNormForTokenClassification (RoBERTa-PreLayerNorm model) XLMConfig configuration class: TFXLMForTokenClassification (XLM model) XLMRobertaConfig configuration class: TFXLMRobertaForTokenClassification (XLM-RoBERTa model) XLNetConfig configuration class: TFXLNetForTokenClassification (XLNet model) Instantiates one of the model classes of the library (with a token classification head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. 
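One practical reason to go through from_config rather than from_pretrained is to build a freshly initialized tagging head with a custom label set before training. A rough sketch, assuming TensorFlow is installed; the label names are purely illustrative:
>>> from transformers import AutoConfig, TFAutoModelForTokenClassification

>>> # Override configuration attributes while loading the config, then build the model.
>>> config = AutoConfig.from_pretrained(
...     "bert-base-cased", num_labels=3, id2label={0: "O", 1: "B-ENT", 2: "I-ENT"}
... )
>>> model = TFAutoModelForTokenClassification.from_config(config)
Both the encoder and the new classification head are randomly initialized here, so the model still has to be trained (or loaded with from_pretrained) before it is useful.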
Examples: >>> from transformers import AutoConfig, TFAutoModelForTokenClassification >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = TFAutoModelForTokenClassification.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. 
code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a token classification head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: albert — TFAlbertForTokenClassification (ALBERT model) bert — TFBertForTokenClassification (BERT model) camembert — TFCamembertForTokenClassification (CamemBERT model) convbert — TFConvBertForTokenClassification (ConvBERT model) deberta — TFDebertaForTokenClassification (DeBERTa model) deberta-v2 — TFDebertaV2ForTokenClassification (DeBERTa-v2 model) distilbert — TFDistilBertForTokenClassification (DistilBERT model) electra — TFElectraForTokenClassification (ELECTRA model) esm — TFEsmForTokenClassification (ESM model) flaubert — TFFlaubertForTokenClassification (FlauBERT model) funnel — TFFunnelForTokenClassification (Funnel Transformer model) layoutlm — TFLayoutLMForTokenClassification (LayoutLM model) layoutlmv3 — TFLayoutLMv3ForTokenClassification (LayoutLMv3 model) longformer — TFLongformerForTokenClassification (Longformer model) mobilebert — TFMobileBertForTokenClassification (MobileBERT model) mpnet — TFMPNetForTokenClassification (MPNet model) rembert — TFRemBertForTokenClassification (RemBERT model) roberta — TFRobertaForTokenClassification (RoBERTa model) roberta-prelayernorm — TFRobertaPreLayerNormForTokenClassification (RoBERTa-PreLayerNorm model) roformer — TFRoFormerForTokenClassification (RoFormer model) xlm — TFXLMForTokenClassification (XLM model) xlm-roberta — TFXLMRobertaForTokenClassification (XLM-RoBERTa model) xlnet — TFXLNetForTokenClassification (XLNet model) Examples: >>> from transformers import AutoConfig, TFAutoModelForTokenClassification >>> >>> model = TFAutoModelForTokenClassification.from_pretrained("bert-base-cased") >>> >>> model = TFAutoModelForTokenClassification.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json") >>> model = TFAutoModelForTokenClassification.from_pretrained( ... 
"./pt_model/bert_pytorch_model.bin", from_pt=True, config=config ... ) FlaxAutoModelForTokenClassification class transformers.FlaxAutoModelForTokenClassification < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a token classification head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: FlaxAlbertForTokenClassification (ALBERT model) BertConfig configuration class: FlaxBertForTokenClassification (BERT model) BigBirdConfig configuration class: FlaxBigBirdForTokenClassification (BigBird model) DistilBertConfig configuration class: FlaxDistilBertForTokenClassification (DistilBERT model) ElectraConfig configuration class: FlaxElectraForTokenClassification (ELECTRA model) RoFormerConfig configuration class: FlaxRoFormerForTokenClassification (RoFormer model) RobertaConfig configuration class: FlaxRobertaForTokenClassification (RoBERTa model) RobertaPreLayerNormConfig configuration class: FlaxRobertaPreLayerNormForTokenClassification (RoBERTa-PreLayerNorm model) XLMRobertaConfig configuration class: FlaxXLMRobertaForTokenClassification (XLM-RoBERTa model) Instantiates one of the model classes of the library (with a token classification head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: >>> from transformers import AutoConfig, FlaxAutoModelForTokenClassification >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = FlaxAutoModelForTokenClassification.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. 
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a token classification head) from a pretrained model. 
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
albert — FlaxAlbertForTokenClassification (ALBERT model)
bert — FlaxBertForTokenClassification (BERT model)
big_bird — FlaxBigBirdForTokenClassification (BigBird model)
distilbert — FlaxDistilBertForTokenClassification (DistilBERT model)
electra — FlaxElectraForTokenClassification (ELECTRA model)
roberta — FlaxRobertaForTokenClassification (RoBERTa model)
roberta-prelayernorm — FlaxRobertaPreLayerNormForTokenClassification (RoBERTa-PreLayerNorm model)
roformer — FlaxRoFormerForTokenClassification (RoFormer model)
xlm-roberta — FlaxXLMRobertaForTokenClassification (XLM-RoBERTa model)
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForTokenClassification

>>> model = FlaxAutoModelForTokenClassification.from_pretrained("bert-base-cased")

>>> model = FlaxAutoModelForTokenClassification.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = FlaxAutoModelForTokenClassification.from_pretrained(
...     "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
AutoModelForQuestionAnswering
class transformers.AutoModelForQuestionAnswering < source > ( *args **kwargs )
This is a generic model class that will be instantiated as one of the model classes of the library (with a question answering head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error).
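In addition to the loading examples further below, here is a minimal extractive question-answering sketch, assuming PyTorch is installed and using a SQuAD-style checkpoint purely as an illustration:
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForQuestionAnswering

>>> # The checkpoint name is illustrative; any extractive QA checkpoint works.
>>> checkpoint = "distilbert-base-cased-distilled-squad"
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> model = AutoModelForQuestionAnswering.from_pretrained(checkpoint)

>>> question = "Where is the Eiffel Tower located?"
>>> context = "The Eiffel Tower is a wrought-iron lattice tower located in Paris, France."
>>> inputs = tokenizer(question, context, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> start = int(outputs.start_logits.argmax())
>>> end = int(outputs.end_logits.argmax())
>>> answer = tokenizer.decode(inputs["input_ids"][0, start : end + 1])
The model predicts start and end positions over the tokenized (question, context) pair; decoding the span between them gives the answer string.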
from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: AlbertForQuestionAnswering (ALBERT model) BartConfig configuration class: BartForQuestionAnswering (BART model) BertConfig configuration class: BertForQuestionAnswering (BERT model) BigBirdConfig configuration class: BigBirdForQuestionAnswering (BigBird model) BigBirdPegasusConfig configuration class: BigBirdPegasusForQuestionAnswering (BigBird-Pegasus model) BloomConfig configuration class: BloomForQuestionAnswering (BLOOM model) CamembertConfig configuration class: CamembertForQuestionAnswering (CamemBERT model) CanineConfig configuration class: CanineForQuestionAnswering (CANINE model) ConvBertConfig configuration class: ConvBertForQuestionAnswering (ConvBERT model) Data2VecTextConfig configuration class: Data2VecTextForQuestionAnswering (Data2VecText model) DebertaConfig configuration class: DebertaForQuestionAnswering (DeBERTa model) DebertaV2Config configuration class: DebertaV2ForQuestionAnswering (DeBERTa-v2 model) DistilBertConfig configuration class: DistilBertForQuestionAnswering (DistilBERT model) ElectraConfig configuration class: ElectraForQuestionAnswering (ELECTRA model) ErnieConfig configuration class: ErnieForQuestionAnswering (ERNIE model) ErnieMConfig configuration class: ErnieMForQuestionAnswering (ErnieM model) FNetConfig configuration class: FNetForQuestionAnswering (FNet model) FalconConfig configuration class: FalconForQuestionAnswering (Falcon model) FlaubertConfig configuration class: FlaubertForQuestionAnsweringSimple (FlauBERT model) FunnelConfig configuration class: FunnelForQuestionAnswering (Funnel Transformer model) GPT2Config configuration class: GPT2ForQuestionAnswering (OpenAI GPT-2 model) GPTJConfig configuration class: GPTJForQuestionAnswering (GPT-J model) GPTNeoConfig configuration class: GPTNeoForQuestionAnswering (GPT Neo model) GPTNeoXConfig configuration class: GPTNeoXForQuestionAnswering (GPT NeoX model) IBertConfig configuration class: IBertForQuestionAnswering (I-BERT model) LEDConfig configuration class: LEDForQuestionAnswering (LED model) LayoutLMv2Config configuration class: LayoutLMv2ForQuestionAnswering (LayoutLMv2 model) LayoutLMv3Config configuration class: LayoutLMv3ForQuestionAnswering (LayoutLMv3 model) LiltConfig configuration class: LiltForQuestionAnswering (LiLT model) LongformerConfig configuration class: LongformerForQuestionAnswering (Longformer model) LukeConfig configuration class: LukeForQuestionAnswering (LUKE model) LxmertConfig configuration class: LxmertForQuestionAnswering (LXMERT model) MBartConfig configuration class: MBartForQuestionAnswering (mBART model) MPNetConfig configuration class: MPNetForQuestionAnswering (MPNet model) MT5Config configuration class: MT5ForQuestionAnswering (MT5 model) MarkupLMConfig configuration class: MarkupLMForQuestionAnswering (MarkupLM model) MegaConfig configuration class: MegaForQuestionAnswering (MEGA model) MegatronBertConfig configuration class: MegatronBertForQuestionAnswering (Megatron-BERT model) MobileBertConfig configuration class: MobileBertForQuestionAnswering (MobileBERT model) MptConfig configuration class: MptForQuestionAnswering (MPT model) MraConfig configuration class: MraForQuestionAnswering (MRA model) MvpConfig configuration class: MvpForQuestionAnswering (MVP model) NezhaConfig configuration class: NezhaForQuestionAnswering (Nezha model) NystromformerConfig configuration class: 
NystromformerForQuestionAnswering (Nyströmformer model) OPTConfig configuration class: OPTForQuestionAnswering (OPT model) QDQBertConfig configuration class: QDQBertForQuestionAnswering (QDQBert model) ReformerConfig configuration class: ReformerForQuestionAnswering (Reformer model) RemBertConfig configuration class: RemBertForQuestionAnswering (RemBERT model) RoCBertConfig configuration class: RoCBertForQuestionAnswering (RoCBert model) RoFormerConfig configuration class: RoFormerForQuestionAnswering (RoFormer model) RobertaConfig configuration class: RobertaForQuestionAnswering (RoBERTa model) RobertaPreLayerNormConfig configuration class: RobertaPreLayerNormForQuestionAnswering (RoBERTa-PreLayerNorm model) SplinterConfig configuration class: SplinterForQuestionAnswering (Splinter model) SqueezeBertConfig configuration class: SqueezeBertForQuestionAnswering (SqueezeBERT model) T5Config configuration class: T5ForQuestionAnswering (T5 model) UMT5Config configuration class: UMT5ForQuestionAnswering (UMT5 model) XLMConfig configuration class: XLMForQuestionAnsweringSimple (XLM model) XLMRobertaConfig configuration class: XLMRobertaForQuestionAnswering (XLM-RoBERTa model) XLMRobertaXLConfig configuration class: XLMRobertaXLForQuestionAnswering (XLM-RoBERTa-XL model) XLNetConfig configuration class: XLNetForQuestionAnsweringSimple (XLNet model) XmodConfig configuration class: XmodForQuestionAnswering (X-MOD model) YosoConfig configuration class: YosoForQuestionAnswering (YOSO model) Instantiates one of the model classes of the library (with a question answering head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: >>> from transformers import AutoConfig, AutoModelForQuestionAnswering >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = AutoModelForQuestionAnswering.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. 
state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a question answering head) from a pretrained model. 
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: albert — AlbertForQuestionAnswering (ALBERT model) bart — BartForQuestionAnswering (BART model) bert — BertForQuestionAnswering (BERT model) big_bird — BigBirdForQuestionAnswering (BigBird model) bigbird_pegasus — BigBirdPegasusForQuestionAnswering (BigBird-Pegasus model) bloom — BloomForQuestionAnswering (BLOOM model) camembert — CamembertForQuestionAnswering (CamemBERT model) canine — CanineForQuestionAnswering (CANINE model) convbert — ConvBertForQuestionAnswering (ConvBERT model) data2vec-text — Data2VecTextForQuestionAnswering (Data2VecText model) deberta — DebertaForQuestionAnswering (DeBERTa model) deberta-v2 — DebertaV2ForQuestionAnswering (DeBERTa-v2 model) distilbert — DistilBertForQuestionAnswering (DistilBERT model) electra — ElectraForQuestionAnswering (ELECTRA model) ernie — ErnieForQuestionAnswering (ERNIE model) ernie_m — ErnieMForQuestionAnswering (ErnieM model) falcon — FalconForQuestionAnswering (Falcon model) flaubert — FlaubertForQuestionAnsweringSimple (FlauBERT model) fnet — FNetForQuestionAnswering (FNet model) funnel — FunnelForQuestionAnswering (Funnel Transformer model) gpt2 — GPT2ForQuestionAnswering (OpenAI GPT-2 model) gpt_neo — GPTNeoForQuestionAnswering (GPT Neo model) gpt_neox — GPTNeoXForQuestionAnswering (GPT NeoX model) gptj — GPTJForQuestionAnswering (GPT-J model) ibert — IBertForQuestionAnswering (I-BERT model) layoutlmv2 — LayoutLMv2ForQuestionAnswering (LayoutLMv2 model) layoutlmv3 — LayoutLMv3ForQuestionAnswering (LayoutLMv3 model) led — LEDForQuestionAnswering (LED model) lilt — LiltForQuestionAnswering (LiLT model) longformer — LongformerForQuestionAnswering (Longformer model) luke — LukeForQuestionAnswering (LUKE model) lxmert — LxmertForQuestionAnswering (LXMERT model) markuplm — MarkupLMForQuestionAnswering (MarkupLM model) mbart — MBartForQuestionAnswering (mBART model) mega — MegaForQuestionAnswering (MEGA model) megatron-bert — MegatronBertForQuestionAnswering (Megatron-BERT model) mobilebert — MobileBertForQuestionAnswering (MobileBERT model) mpnet — MPNetForQuestionAnswering (MPNet model) mpt — MptForQuestionAnswering (MPT model) mra — MraForQuestionAnswering (MRA model) mt5 — MT5ForQuestionAnswering (MT5 model) mvp — MvpForQuestionAnswering (MVP model) nezha — NezhaForQuestionAnswering (Nezha model) nystromformer — NystromformerForQuestionAnswering (Nyströmformer model) opt — OPTForQuestionAnswering (OPT model) qdqbert — QDQBertForQuestionAnswering (QDQBert model) reformer — ReformerForQuestionAnswering (Reformer model) rembert — RemBertForQuestionAnswering (RemBERT model) roberta — RobertaForQuestionAnswering (RoBERTa model) roberta-prelayernorm — RobertaPreLayerNormForQuestionAnswering (RoBERTa-PreLayerNorm model) roc_bert — RoCBertForQuestionAnswering (RoCBert model) roformer — RoFormerForQuestionAnswering (RoFormer model) splinter — SplinterForQuestionAnswering (Splinter model) squeezebert — SqueezeBertForQuestionAnswering (SqueezeBERT model) t5 — T5ForQuestionAnswering (T5 model) umt5 — UMT5ForQuestionAnswering (UMT5 model) xlm — XLMForQuestionAnsweringSimple (XLM model) xlm-roberta — XLMRobertaForQuestionAnswering (XLM-RoBERTa model) xlm-roberta-xl — XLMRobertaXLForQuestionAnswering (XLM-RoBERTa-XL model) xlnet — XLNetForQuestionAnsweringSimple (XLNet 
model) xmod — XmodForQuestionAnswering (X-MOD model) yoso — YosoForQuestionAnswering (YOSO model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: >>> from transformers import AutoConfig, AutoModelForQuestionAnswering >>> >>> model = AutoModelForQuestionAnswering.from_pretrained("bert-base-cased") >>> >>> model = AutoModelForQuestionAnswering.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json") >>> model = AutoModelForQuestionAnswering.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config ... ) TFAutoModelForQuestionAnswering class transformers.TFAutoModelForQuestionAnswering < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a question answering head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: TFAlbertForQuestionAnswering (ALBERT model) BertConfig configuration class: TFBertForQuestionAnswering (BERT model) CamembertConfig configuration class: TFCamembertForQuestionAnswering (CamemBERT model) ConvBertConfig configuration class: TFConvBertForQuestionAnswering (ConvBERT model) DebertaConfig configuration class: TFDebertaForQuestionAnswering (DeBERTa model) DebertaV2Config configuration class: TFDebertaV2ForQuestionAnswering (DeBERTa-v2 model) DistilBertConfig configuration class: TFDistilBertForQuestionAnswering (DistilBERT model) ElectraConfig configuration class: TFElectraForQuestionAnswering (ELECTRA model) FlaubertConfig configuration class: TFFlaubertForQuestionAnsweringSimple (FlauBERT model) FunnelConfig configuration class: TFFunnelForQuestionAnswering (Funnel Transformer model) GPTJConfig configuration class: TFGPTJForQuestionAnswering (GPT-J model) LayoutLMv3Config configuration class: TFLayoutLMv3ForQuestionAnswering (LayoutLMv3 model) LongformerConfig configuration class: TFLongformerForQuestionAnswering (Longformer model) MPNetConfig configuration class: TFMPNetForQuestionAnswering (MPNet model) MobileBertConfig configuration class: TFMobileBertForQuestionAnswering (MobileBERT model) RemBertConfig configuration class: TFRemBertForQuestionAnswering (RemBERT model) RoFormerConfig configuration class: TFRoFormerForQuestionAnswering (RoFormer model) RobertaConfig configuration class: TFRobertaForQuestionAnswering (RoBERTa model) RobertaPreLayerNormConfig configuration class: TFRobertaPreLayerNormForQuestionAnswering (RoBERTa-PreLayerNorm model) XLMConfig configuration class: TFXLMForQuestionAnsweringSimple (XLM model) XLMRobertaConfig configuration class: TFXLMRobertaForQuestionAnswering (XLM-RoBERTa model) XLNetConfig configuration class: TFXLNetForQuestionAnsweringSimple (XLNet model) Instantiates one of the model classes of the library (with a question answering head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. 
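To complement the loading examples that follow, a rough TensorFlow inference sketch; the checkpoint name is an assumption, and any question-answering checkpoint with TensorFlow weights works (or pass from_pt=True to convert a PyTorch-only one on the fly):
>>> import tensorflow as tf
>>> from transformers import AutoTokenizer, TFAutoModelForQuestionAnswering

>>> checkpoint = "distilbert-base-cased-distilled-squad"  # illustrative checkpoint
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> model = TFAutoModelForQuestionAnswering.from_pretrained(checkpoint)

>>> inputs = tokenizer(
...     "Where is the Eiffel Tower located?",
...     "The Eiffel Tower is a wrought-iron lattice tower located in Paris, France.",
...     return_tensors="tf",
... )
>>> outputs = model(**inputs)
>>> start = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
>>> end = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
>>> answer = tokenizer.decode(inputs["input_ids"][0, start : end + 1])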
Examples: >>> from transformers import AutoConfig, TFAutoModelForQuestionAnswering >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = TFAutoModelForQuestionAnswering.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. 
code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a question answering head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: albert — TFAlbertForQuestionAnswering (ALBERT model) bert — TFBertForQuestionAnswering (BERT model) camembert — TFCamembertForQuestionAnswering (CamemBERT model) convbert — TFConvBertForQuestionAnswering (ConvBERT model) deberta — TFDebertaForQuestionAnswering (DeBERTa model) deberta-v2 — TFDebertaV2ForQuestionAnswering (DeBERTa-v2 model) distilbert — TFDistilBertForQuestionAnswering (DistilBERT model) electra — TFElectraForQuestionAnswering (ELECTRA model) flaubert — TFFlaubertForQuestionAnsweringSimple (FlauBERT model) funnel — TFFunnelForQuestionAnswering (Funnel Transformer model) gptj — TFGPTJForQuestionAnswering (GPT-J model) layoutlmv3 — TFLayoutLMv3ForQuestionAnswering (LayoutLMv3 model) longformer — TFLongformerForQuestionAnswering (Longformer model) mobilebert — TFMobileBertForQuestionAnswering (MobileBERT model) mpnet — TFMPNetForQuestionAnswering (MPNet model) rembert — TFRemBertForQuestionAnswering (RemBERT model) roberta — TFRobertaForQuestionAnswering (RoBERTa model) roberta-prelayernorm — TFRobertaPreLayerNormForQuestionAnswering (RoBERTa-PreLayerNorm model) roformer — TFRoFormerForQuestionAnswering (RoFormer model) xlm — TFXLMForQuestionAnsweringSimple (XLM model) xlm-roberta — TFXLMRobertaForQuestionAnswering (XLM-RoBERTa model) xlnet — TFXLNetForQuestionAnsweringSimple (XLNet model) Examples: >>> from transformers import AutoConfig, TFAutoModelForQuestionAnswering >>> >>> model = TFAutoModelForQuestionAnswering.from_pretrained("bert-base-cased") >>> >>> model = TFAutoModelForQuestionAnswering.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json") >>> model = TFAutoModelForQuestionAnswering.from_pretrained( ... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config ... 
) FlaxAutoModelForQuestionAnswering class transformers.FlaxAutoModelForQuestionAnswering < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a question answering head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: AlbertConfig configuration class: FlaxAlbertForQuestionAnswering (ALBERT model) BartConfig configuration class: FlaxBartForQuestionAnswering (BART model) BertConfig configuration class: FlaxBertForQuestionAnswering (BERT model) BigBirdConfig configuration class: FlaxBigBirdForQuestionAnswering (BigBird model) DistilBertConfig configuration class: FlaxDistilBertForQuestionAnswering (DistilBERT model) ElectraConfig configuration class: FlaxElectraForQuestionAnswering (ELECTRA model) MBartConfig configuration class: FlaxMBartForQuestionAnswering (mBART model) RoFormerConfig configuration class: FlaxRoFormerForQuestionAnswering (RoFormer model) RobertaConfig configuration class: FlaxRobertaForQuestionAnswering (RoBERTa model) RobertaPreLayerNormConfig configuration class: FlaxRobertaPreLayerNormForQuestionAnswering (RoBERTa-PreLayerNorm model) XLMRobertaConfig configuration class: FlaxXLMRobertaForQuestionAnswering (XLM-RoBERTa model) Instantiates one of the model classes of the library (with a question answering head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: >>> from transformers import AutoConfig, FlaxAutoModelForQuestionAnswering >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = FlaxAutoModelForQuestionAnswering.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. 
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a question answering head) from a pretrained model. 
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: albert — FlaxAlbertForQuestionAnswering (ALBERT model) bart — FlaxBartForQuestionAnswering (BART model) bert — FlaxBertForQuestionAnswering (BERT model) big_bird — FlaxBigBirdForQuestionAnswering (BigBird model) distilbert — FlaxDistilBertForQuestionAnswering (DistilBERT model) electra — FlaxElectraForQuestionAnswering (ELECTRA model) mbart — FlaxMBartForQuestionAnswering (mBART model) roberta — FlaxRobertaForQuestionAnswering (RoBERTa model) roberta-prelayernorm — FlaxRobertaPreLayerNormForQuestionAnswering (RoBERTa-PreLayerNorm model) roformer — FlaxRoFormerForQuestionAnswering (RoFormer model) xlm-roberta — FlaxXLMRobertaForQuestionAnswering (XLM-RoBERTa model) Examples: >>> from transformers import AutoConfig, FlaxAutoModelForQuestionAnswering >>> >>> model = FlaxAutoModelForQuestionAnswering.from_pretrained("bert-base-cased") >>> >>> model = FlaxAutoModelForQuestionAnswering.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json") >>> model = FlaxAutoModelForQuestionAnswering.from_pretrained( ... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config ... ) AutoModelForTextEncoding class transformers.AutoModelForTextEncoding < source > ( *args **kwargs ) TFAutoModelForTextEncoding class transformers.TFAutoModelForTextEncoding < source > ( *args **kwargs ) Computer vision The following auto classes are available for the following computer vision tasks. AutoModelForDepthEstimation class transformers.AutoModelForDepthEstimation < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a depth estimation head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: DPTConfig configuration class: DPTForDepthEstimation (DPT model) GLPNConfig configuration class: GLPNForDepthEstimation (GLPN model) Instantiates one of the model classes of the library (with a depth estimation head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: >>> from transformers import AutoConfig, AutoModelForDepthEstimation >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = AutoModelForDepthEstimation.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index). 
In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a depth estimation head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: dpt — DPTForDepthEstimation (DPT model) glpn — GLPNForDepthEstimation (GLPN model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train(). Examples: >>> from transformers import AutoConfig, AutoModelForDepthEstimation >>> >>> model = AutoModelForDepthEstimation.from_pretrained("bert-base-cased") >>> >>> model = AutoModelForDepthEstimation.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json") >>> model = AutoModelForDepthEstimation.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config ... ) AutoModelForImageClassification class transformers.AutoModelForImageClassification < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with an image classification head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error).
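Since the reference examples further below reuse a generic text checkpoint name, the following sketch shows a more typical end-to-end use of the class. It assumes the public google/vit-base-patch16-224 checkpoint and a local image file cat.png; both are stand-ins, and any image classification checkpoint on the Hub works the same way.

>>> import torch
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, AutoModelForImageClassification

>>> processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
>>> model = AutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224")

>>> inputs = processor(images=Image.open("cat.png"), return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # The checkpoint's config carries the label names of the classes it was fine-tuned on.
>>> predicted_label = model.config.id2label[logits.argmax(-1).item()]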
from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: BeitConfig configuration class: BeitForImageClassification (BEiT model) BitConfig configuration class: BitForImageClassification (BiT model) ConvNextConfig configuration class: ConvNextForImageClassification (ConvNeXT model) ConvNextV2Config configuration class: ConvNextV2ForImageClassification (ConvNeXTV2 model) CvtConfig configuration class: CvtForImageClassification (CvT model) Data2VecVisionConfig configuration class: Data2VecVisionForImageClassification (Data2VecVision model) DeiTConfig configuration class: DeiTForImageClassification or DeiTForImageClassificationWithTeacher (DeiT model) DinatConfig configuration class: DinatForImageClassification (DiNAT model) Dinov2Config configuration class: Dinov2ForImageClassification (DINOv2 model) EfficientFormerConfig configuration class: EfficientFormerForImageClassification or EfficientFormerForImageClassificationWithTeacher (EfficientFormer model) EfficientNetConfig configuration class: EfficientNetForImageClassification (EfficientNet model) FocalNetConfig configuration class: FocalNetForImageClassification (FocalNet model) ImageGPTConfig configuration class: ImageGPTForImageClassification (ImageGPT model) LevitConfig configuration class: LevitForImageClassification or LevitForImageClassificationWithTeacher (LeViT model) MobileNetV1Config configuration class: MobileNetV1ForImageClassification (MobileNetV1 model) MobileNetV2Config configuration class: MobileNetV2ForImageClassification (MobileNetV2 model) MobileViTConfig configuration class: MobileViTForImageClassification (MobileViT model) MobileViTV2Config configuration class: MobileViTV2ForImageClassification (MobileViTV2 model) NatConfig configuration class: NatForImageClassification (NAT model) PerceiverConfig configuration class: PerceiverForImageClassificationLearned or PerceiverForImageClassificationFourier or PerceiverForImageClassificationConvProcessing (Perceiver model) PoolFormerConfig configuration class: PoolFormerForImageClassification (PoolFormer model) PvtConfig configuration class: PvtForImageClassification (PVT model) RegNetConfig configuration class: RegNetForImageClassification (RegNet model) ResNetConfig configuration class: ResNetForImageClassification (ResNet model) SegformerConfig configuration class: SegformerForImageClassification (SegFormer model) SwiftFormerConfig configuration class: SwiftFormerForImageClassification (SwiftFormer model) SwinConfig configuration class: SwinForImageClassification (Swin Transformer model) Swinv2Config configuration class: Swinv2ForImageClassification (Swin Transformer V2 model) VanConfig configuration class: VanForImageClassification (VAN model) ViTConfig configuration class: ViTForImageClassification (ViT model) ViTHybridConfig configuration class: ViTHybridForImageClassification (ViT Hybrid model) ViTMSNConfig configuration class: ViTMSNForImageClassification (ViTMSN model) Instantiates one of the model classes of the library (with a image classification head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. 
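As a complement to the generic example below, this hedged sketch passes a configuration class that actually appears in the image classification mapping (ViTConfig); the num_labels value is a made-up illustration, and the resulting model stays randomly initialized until weights are loaded with from_pretrained().

>>> from transformers import AutoModelForImageClassification, ViTConfig

>>> # ViTConfig maps to ViTForImageClassification; the weights are freshly initialized, not pretrained.
>>> config = ViTConfig(num_labels=10)
>>> model = AutoModelForImageClassification.from_config(config)
>>> type(model).__name__
'ViTForImageClassification'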
Examples: >>> from transformers import AutoConfig, AutoModelForImageClassification >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = AutoModelForImageClassification.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. 
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a image classification head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: beit — BeitForImageClassification (BEiT model) bit — BitForImageClassification (BiT model) convnext — ConvNextForImageClassification (ConvNeXT model) convnextv2 — ConvNextV2ForImageClassification (ConvNeXTV2 model) cvt — CvtForImageClassification (CvT model) data2vec-vision — Data2VecVisionForImageClassification (Data2VecVision model) deit — DeiTForImageClassification or DeiTForImageClassificationWithTeacher (DeiT model) dinat — DinatForImageClassification (DiNAT model) dinov2 — Dinov2ForImageClassification (DINOv2 model) efficientformer — EfficientFormerForImageClassification or EfficientFormerForImageClassificationWithTeacher (EfficientFormer model) efficientnet — EfficientNetForImageClassification (EfficientNet model) focalnet — FocalNetForImageClassification (FocalNet model) imagegpt — ImageGPTForImageClassification (ImageGPT model) levit — LevitForImageClassification or LevitForImageClassificationWithTeacher (LeViT model) mobilenet_v1 — MobileNetV1ForImageClassification (MobileNetV1 model) mobilenet_v2 — MobileNetV2ForImageClassification (MobileNetV2 model) mobilevit — MobileViTForImageClassification (MobileViT model) mobilevitv2 — MobileViTV2ForImageClassification (MobileViTV2 model) nat — NatForImageClassification (NAT model) perceiver — PerceiverForImageClassificationLearned or PerceiverForImageClassificationFourier or PerceiverForImageClassificationConvProcessing (Perceiver model) poolformer — PoolFormerForImageClassification (PoolFormer model) pvt — PvtForImageClassification (PVT model) regnet — RegNetForImageClassification (RegNet model) resnet — 
ResNetForImageClassification (ResNet model) segformer — SegformerForImageClassification (SegFormer model) swiftformer — SwiftFormerForImageClassification (SwiftFormer model) swin — SwinForImageClassification (Swin Transformer model) swinv2 — Swinv2ForImageClassification (Swin Transformer V2 model) van — VanForImageClassification (VAN model) vit — ViTForImageClassification (ViT model) vit_hybrid — ViTHybridForImageClassification (ViT Hybrid model) vit_msn — ViTMSNForImageClassification (ViTMSN model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: >>> from transformers import AutoConfig, AutoModelForImageClassification >>> >>> model = AutoModelForImageClassification.from_pretrained("bert-base-cased") >>> >>> model = AutoModelForImageClassification.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json") >>> model = AutoModelForImageClassification.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config ... ) TFAutoModelForImageClassification class transformers.TFAutoModelForImageClassification < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a image classification head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: ConvNextConfig configuration class: TFConvNextForImageClassification (ConvNeXT model) CvtConfig configuration class: TFCvtForImageClassification (CvT model) Data2VecVisionConfig configuration class: TFData2VecVisionForImageClassification (Data2VecVision model) DeiTConfig configuration class: TFDeiTForImageClassification or TFDeiTForImageClassificationWithTeacher (DeiT model) EfficientFormerConfig configuration class: TFEfficientFormerForImageClassification or TFEfficientFormerForImageClassificationWithTeacher (EfficientFormer model) MobileViTConfig configuration class: TFMobileViTForImageClassification (MobileViT model) RegNetConfig configuration class: TFRegNetForImageClassification (RegNet model) ResNetConfig configuration class: TFResNetForImageClassification (ResNet model) SegformerConfig configuration class: TFSegformerForImageClassification (SegFormer model) SwinConfig configuration class: TFSwinForImageClassification (Swin Transformer model) ViTConfig configuration class: TFViTForImageClassification (ViT model) Instantiates one of the model classes of the library (with a image classification head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. 
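As the note above says, from_config() leaves the weights uninitialized; to reuse weights that were fine-tuned in PyTorch, from_pretrained() with from_pt=True is the usual route. A hedged sketch, in which ./vit-finetuned is a hypothetical directory written by a PyTorch model's save_pretrained():

>>> from transformers import TFAutoModelForImageClassification

>>> # Load PyTorch weights (pytorch_model.bin + config.json) into the TensorFlow architecture.
>>> model = TFAutoModelForImageClassification.from_pretrained("./vit-finetuned", from_pt=True)
>>> # Saving afterwards writes native TF weights (tf_model.h5) next to the config.
>>> model.save_pretrained("./vit-finetuned-tf")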
Examples: >>> from transformers import AutoConfig, TFAutoModelForImageClassification >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = TFAutoModelForImageClassification.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. 
code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a image classification head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: convnext — TFConvNextForImageClassification (ConvNeXT model) cvt — TFCvtForImageClassification (CvT model) data2vec-vision — TFData2VecVisionForImageClassification (Data2VecVision model) deit — TFDeiTForImageClassification or TFDeiTForImageClassificationWithTeacher (DeiT model) efficientformer — TFEfficientFormerForImageClassification or TFEfficientFormerForImageClassificationWithTeacher (EfficientFormer model) mobilevit — TFMobileViTForImageClassification (MobileViT model) regnet — TFRegNetForImageClassification (RegNet model) resnet — TFResNetForImageClassification (ResNet model) segformer — TFSegformerForImageClassification (SegFormer model) swin — TFSwinForImageClassification (Swin Transformer model) vit — TFViTForImageClassification (ViT model) Examples: >>> from transformers import AutoConfig, TFAutoModelForImageClassification >>> >>> model = TFAutoModelForImageClassification.from_pretrained("bert-base-cased") >>> >>> model = TFAutoModelForImageClassification.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json") >>> model = TFAutoModelForImageClassification.from_pretrained( ... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config ... ) FlaxAutoModelForImageClassification class transformers.FlaxAutoModelForImageClassification < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a image classification head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). 
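A minimal Flax usage sketch, under the assumption that google/vit-base-patch16-224 also publishes Flax weights (if it did not, from_pt=True could convert the PyTorch checkpoint on load); cat.png is a placeholder image path.

>>> import jax.numpy as jnp
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, FlaxAutoModelForImageClassification

>>> processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224")
>>> model = FlaxAutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224")

>>> # Flax models consume NumPy arrays, so the processor is asked for "np" tensors.
>>> inputs = processor(images=Image.open("cat.png"), return_tensors="np")
>>> logits = model(**inputs).logits
>>> predicted_label = model.config.id2label[int(jnp.argmax(logits, axis=-1)[0])]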
from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. 
It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with an image classification head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: beit — FlaxBeitForImageClassification (BEiT model) regnet — FlaxRegNetForImageClassification (RegNet model) resnet — FlaxResNetForImageClassification (ResNet model) vit — FlaxViTForImageClassification (ViT model) Examples: >>> from transformers import AutoConfig, FlaxAutoModelForImageClassification >>> >>> model = FlaxAutoModelForImageClassification.from_pretrained("bert-base-cased") >>> >>> model = FlaxAutoModelForImageClassification.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json") >>> model = FlaxAutoModelForImageClassification.from_pretrained( ... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config ... ) AutoModelForVideoClassification class transformers.AutoModelForVideoClassification < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a video classification head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: TimesformerConfig configuration class: TimesformerForVideoClassification (TimeSformer model) VideoMAEConfig configuration class: VideoMAEForVideoClassification (VideoMAE model) VivitConfig configuration class: VivitForVideoClassification (ViViT model) Instantiates one of the model classes of the library (with a video classification head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
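Because the generic examples below reuse a text checkpoint name, here is a hedged end-to-end sketch with an actual video checkpoint. It assumes the public MCG-NJU/videomae-base-finetuned-kinetics model, and sixteen random frames stand in for a real decoded clip.

>>> import numpy as np
>>> import torch
>>> from transformers import AutoImageProcessor, AutoModelForVideoClassification

>>> checkpoint = "MCG-NJU/videomae-base-finetuned-kinetics"
>>> processor = AutoImageProcessor.from_pretrained(checkpoint)
>>> model = AutoModelForVideoClassification.from_pretrained(checkpoint)

>>> # VideoMAE expects 16 frames; random channels-first frames are used here purely as a placeholder.
>>> video = list(np.random.randint(0, 256, (16, 3, 224, 224)))
>>> inputs = processor(video, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_label = model.config.id2label[logits.argmax(-1).item()]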
Examples: >>> from transformers import AutoConfig, AutoModelForVideoClassification >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = AutoModelForVideoClassification.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. 
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a video classification head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: timesformer — TimesformerForVideoClassification (TimeSformer model) videomae — VideoMAEForVideoClassification (VideoMAE model) vivit — VivitForVideoClassification (ViViT model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: >>> from transformers import AutoConfig, AutoModelForVideoClassification >>> >>> model = AutoModelForVideoClassification.from_pretrained("bert-base-cased") >>> >>> model = AutoModelForVideoClassification.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json") >>> model = AutoModelForVideoClassification.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config ... ) AutoModelForMaskedImageModeling class transformers.AutoModelForMaskedImageModeling < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a masked image modeling head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. 
Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. 
code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a masked image modeling head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

deit — DeiTForMaskedImageModeling (DeiT model)
focalnet — FocalNetForMaskedImageModeling (FocalNet model)
swin — SwinForMaskedImageModeling (Swin Transformer model)
swinv2 — Swinv2ForMaskedImageModeling (Swin Transformer V2 model)
vit — ViTForMaskedImageModeling (ViT model)

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().

Examples:

>>> from transformers import AutoConfig, AutoModelForMaskedImageModeling

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForMaskedImageModeling.from_pretrained("bert-base-cased")

>>> # Update configuration during loading
>>> model = AutoModelForMaskedImageModeling.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForMaskedImageModeling.from_pretrained(
...     "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )

TFAutoModelForMaskedImageModeling

class transformers.TFAutoModelForMaskedImageModeling < source > ( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a masked image modeling head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config < source > ( **kwargs )

Parameters

config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

DeiTConfig configuration class: TFDeiTForMaskedImageModeling (DeiT model)
SwinConfig configuration class: TFSwinForMaskedImageModeling (Swin Transformer model)

Instantiates one of the model classes of the library (with a masked image modeling head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: >>> from transformers import AutoConfig, TFAutoModelForMaskedImageModeling >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = TFAutoModelForMaskedImageModeling.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. 
This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a masked image modeling head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

deit — TFDeiTForMaskedImageModeling (DeiT model)
swin — TFSwinForMaskedImageModeling (Swin Transformer model)

Examples:

>>> from transformers import AutoConfig, TFAutoModelForMaskedImageModeling

>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForMaskedImageModeling.from_pretrained("bert-base-cased")

>>> # Update configuration during loading
>>> model = TFAutoModelForMaskedImageModeling.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForMaskedImageModeling.from_pretrained(
...     "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )

AutoModelForObjectDetection

class transformers.AutoModelForObjectDetection < source > ( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with an object detection head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error). A brief usage sketch follows below.
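The following end-to-end sketch is not part of the original API reference; it illustrates how an object detection checkpoint is typically paired with AutoImageProcessor for pre- and post-processing. The checkpoint name facebook/detr-resnet-50 and the local image path are illustrative assumptions; substitute your own.

>>> import torch
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, AutoModelForObjectDetection

>>> # Illustrative checkpoint and image; any object detection checkpoint works the same way.
>>> image = Image.open("cats.jpg")
>>> processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50")
>>> model = AutoModelForObjectDetection.from_pretrained("facebook/detr-resnet-50")

>>> # Preprocess the image, run the model without gradients, then convert raw outputs to boxes/labels.
>>> inputs = processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> target_sizes = torch.tensor([image.size[::-1]])  # (height, width) of the original image
>>> results = processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[0]
>>> for score, label in zip(results["scores"], results["labels"]):
...     print(model.config.id2label[label.item()], round(score.item(), 2))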
from_config < source > ( **kwargs )

Parameters

config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

ConditionalDetrConfig configuration class: ConditionalDetrForObjectDetection (Conditional DETR model)
DeformableDetrConfig configuration class: DeformableDetrForObjectDetection (Deformable DETR model)
DetaConfig configuration class: DetaForObjectDetection (DETA model)
DetrConfig configuration class: DetrForObjectDetection (DETR model)
TableTransformerConfig configuration class: TableTransformerForObjectDetection (Table Transformer model)
YolosConfig configuration class: YolosForObjectDetection (YOLOS model)

Instantiates one of the model classes of the library (with an object detection head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, AutoModelForObjectDetection

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("bert-base-cased")
>>> model = AutoModelForObjectDetection.from_config(config)

from_pretrained < source > ( *model_args **kwargs )

Parameters

pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with an object detection head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

conditional_detr — ConditionalDetrForObjectDetection (Conditional DETR model)
deformable_detr — DeformableDetrForObjectDetection (Deformable DETR model)
deta — DetaForObjectDetection (DETA model)
detr — DetrForObjectDetection (DETR model)
table-transformer — TableTransformerForObjectDetection (Table Transformer model)
yolos — YolosForObjectDetection (YOLOS model)

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train(), as shown in the sketch below.
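As a small illustration of the evaluation/training note above (a sketch rather than part of the original reference; the checkpoint name is only a placeholder):

>>> from transformers import AutoModelForObjectDetection

>>> model = AutoModelForObjectDetection.from_pretrained("facebook/detr-resnet-50")  # placeholder checkpoint
>>> model.training  # loaded in evaluation mode: dropout and similar layers are disabled
False
>>> _ = model.train()  # switch to training mode before fine-tuning
>>> model.training
True
>>> _ = model.eval()  # switch back to evaluation mode for inference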
Examples:

>>> from transformers import AutoConfig, AutoModelForObjectDetection

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForObjectDetection.from_pretrained("bert-base-cased")

>>> # Update configuration during loading
>>> model = AutoModelForObjectDetection.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForObjectDetection.from_pretrained(
...     "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )

AutoModelForImageSegmentation

class transformers.AutoModelForImageSegmentation < source > ( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with an image segmentation head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config < source > ( **kwargs )

Parameters

config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

DetrConfig configuration class: DetrForSegmentation (DETR model)

Instantiates one of the model classes of the library (with an image segmentation head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, AutoModelForImageSegmentation

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("bert-base-cased")
>>> model = AutoModelForImageSegmentation.from_config(config)

from_pretrained < source > ( *model_args **kwargs )

Parameters

pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights.
In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with an image segmentation head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

detr — DetrForSegmentation (DETR model)

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().

Examples:

>>> from transformers import AutoConfig, AutoModelForImageSegmentation

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForImageSegmentation.from_pretrained("bert-base-cased")

>>> # Update configuration during loading
>>> model = AutoModelForImageSegmentation.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForImageSegmentation.from_pretrained(
...     "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )

AutoModelForImageToImage

class transformers.AutoModelForImageToImage < source > ( *args **kwargs )

AutoModelForSemanticSegmentation

class transformers.AutoModelForSemanticSegmentation < source > ( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a semantic segmentation head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config < source > ( **kwargs )

Parameters

config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

BeitConfig configuration class: BeitForSemanticSegmentation (BEiT model)
DPTConfig configuration class: DPTForSemanticSegmentation (DPT model)
Data2VecVisionConfig configuration class: Data2VecVisionForSemanticSegmentation (Data2VecVision model)
MobileNetV2Config configuration class: MobileNetV2ForSemanticSegmentation (MobileNetV2 model)
MobileViTConfig configuration class: MobileViTForSemanticSegmentation (MobileViT model)
MobileViTV2Config configuration class: MobileViTV2ForSemanticSegmentation (MobileViTV2 model)
SegformerConfig configuration class: SegformerForSemanticSegmentation (SegFormer model)
UperNetConfig configuration class: UperNetForSemanticSegmentation (UPerNet model)

Instantiates one of the model classes of the library (with a semantic segmentation head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, AutoModelForSemanticSegmentation

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("bert-base-cased")
>>> model = AutoModelForSemanticSegmentation.from_config(config)

from_pretrained < source > ( *model_args **kwargs )

Parameters

pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index).
In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. 
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a semantic segmentation head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

beit — BeitForSemanticSegmentation (BEiT model)
data2vec-vision — Data2VecVisionForSemanticSegmentation (Data2VecVision model)
dpt — DPTForSemanticSegmentation (DPT model)
mobilenet_v2 — MobileNetV2ForSemanticSegmentation (MobileNetV2 model)
mobilevit — MobileViTForSemanticSegmentation (MobileViT model)
mobilevitv2 — MobileViTV2ForSemanticSegmentation (MobileViTV2 model)
segformer — SegformerForSemanticSegmentation (SegFormer model)
upernet — UperNetForSemanticSegmentation (UPerNet model)

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().

Examples:

>>> from transformers import AutoConfig, AutoModelForSemanticSegmentation

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForSemanticSegmentation.from_pretrained("bert-base-cased")

>>> # Update configuration during loading
>>> model = AutoModelForSemanticSegmentation.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForSemanticSegmentation.from_pretrained(
...     "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )

TFAutoModelForSemanticSegmentation

class transformers.TFAutoModelForSemanticSegmentation < source > ( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a semantic segmentation head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_pretrained < source > ( *model_args **kwargs )

Parameters

pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin).
In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). 
Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a semantic segmentation head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

data2vec-vision — TFData2VecVisionForSemanticSegmentation (Data2VecVision model)
mobilevit — TFMobileViTForSemanticSegmentation (MobileViT model)
segformer — TFSegformerForSemanticSegmentation (SegFormer model)

Examples:

>>> from transformers import AutoConfig, TFAutoModelForSemanticSegmentation

>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForSemanticSegmentation.from_pretrained("bert-base-cased")

>>> # Update configuration during loading
>>> model = TFAutoModelForSemanticSegmentation.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForSemanticSegmentation.from_pretrained(
...     "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )

AutoModelForInstanceSegmentation

class transformers.AutoModelForInstanceSegmentation < source > ( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with an instance segmentation head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_config < source > ( **kwargs )

Parameters

config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:

MaskFormerConfig configuration class: MaskFormerForInstanceSegmentation (MaskFormer model)

Instantiates one of the model classes of the library (with an instance segmentation head) from a configuration.

Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.

Examples:

>>> from transformers import AutoConfig, AutoModelForInstanceSegmentation

>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("bert-base-cased")
>>> model = AutoModelForInstanceSegmentation.from_config(config)

from_pretrained < source > ( *model_args **kwargs )

Parameters

pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. 
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with an instance segmentation head) from a pretrained model.

The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:

maskformer — MaskFormerForInstanceSegmentation (MaskFormer model)

The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().

Examples:

>>> from transformers import AutoConfig, AutoModelForInstanceSegmentation

>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForInstanceSegmentation.from_pretrained("bert-base-cased")

>>> # Update configuration during loading
>>> model = AutoModelForInstanceSegmentation.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True

>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForInstanceSegmentation.from_pretrained(
...     "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )

AutoModelForUniversalSegmentation

class transformers.AutoModelForUniversalSegmentation < source > ( *args **kwargs )

This is a generic model class that will be instantiated as one of the model classes of the library (with a universal image segmentation head) when created with the from_pretrained() class method or the from_config() class method.

This class cannot be instantiated directly using __init__() (throws an error).

from_pretrained < source > ( *model_args **kwargs )

Parameters

pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). 
Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a universal image segmentation head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: detr — DetrForSegmentation (DETR model) mask2former — Mask2FormerForUniversalSegmentation (Mask2Former model) maskformer — MaskFormerForInstanceSegmentation (MaskFormer model) oneformer — OneFormerForUniversalSegmentation (OneFormer model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: >>> from transformers import AutoConfig, AutoModelForUniversalSegmentation >>> >>> model = AutoModelForUniversalSegmentation.from_pretrained("bert-base-cased") >>> >>> model = AutoModelForUniversalSegmentation.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json") >>> model = AutoModelForUniversalSegmentation.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config ... ) AutoModelForZeroShotImageClassification class transformers.AutoModelForZeroShotImageClassification < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a zero-shot image classification head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: AlignConfig configuration class: AlignModel (ALIGN model) AltCLIPConfig configuration class: AltCLIPModel (AltCLIP model) BlipConfig configuration class: BlipModel (BLIP model) CLIPConfig configuration class: CLIPModel (CLIP model) CLIPSegConfig configuration class: CLIPSegModel (CLIPSeg model) ChineseCLIPConfig configuration class: ChineseCLIPModel (Chinese-CLIP model) Instantiates one of the model classes of the library (with a zero-shot image classification head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. 
Examples: >>> from transformers import AutoConfig, AutoModelForZeroShotImageClassification >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = AutoModelForZeroShotImageClassification.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. 
It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a zero-shot image classification head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: align — AlignModel (ALIGN model) altclip — AltCLIPModel (AltCLIP model) blip — BlipModel (BLIP model) chinese_clip — ChineseCLIPModel (Chinese-CLIP model) clip — CLIPModel (CLIP model) clipseg — CLIPSegModel (CLIPSeg model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: >>> from transformers import AutoConfig, AutoModelForZeroShotImageClassification >>> >>> model = AutoModelForZeroShotImageClassification.from_pretrained("bert-base-cased") >>> >>> model = AutoModelForZeroShotImageClassification.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json") >>> model = AutoModelForZeroShotImageClassification.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config ... ) TFAutoModelForZeroShotImageClassification class transformers.TFAutoModelForZeroShotImageClassification < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a zero-shot image classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: BlipConfig configuration class: TFBlipModel (BLIP model) CLIPConfig configuration class: TFCLIPModel (CLIP model) Instantiates one of the model classes of the library (with a zero-shot image classification head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: >>> from transformers import AutoConfig, TFAutoModelForZeroShotImageClassification >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = TFAutoModelForZeroShotImageClassification.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a zero-shot image classification head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: blip — TFBlipModel (BLIP model) clip — TFCLIPModel (CLIP model) Examples: >>> from transformers import AutoConfig, TFAutoModelForZeroShotImageClassification >>> >>> model = TFAutoModelForZeroShotImageClassification.from_pretrained("bert-base-cased") >>> >>> model = TFAutoModelForZeroShotImageClassification.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json") >>> model = TFAutoModelForZeroShotImageClassification.from_pretrained( ... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config ... ) AutoModelForZeroShotObjectDetection class transformers.AutoModelForZeroShotObjectDetection < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a zero-shot object detection head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error).
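As a quick orientation before the method reference that follows, a minimal usage sketch. The checkpoint name google/owlvit-base-patch32 is an assumption chosen for illustration; because its configuration has model_type "owlvit", the call resolves to OwlViTForObjectDetection, as listed under from_pretrained() below.

>>> from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

>>> # Assumed example checkpoint; any checkpoint of a supported model type works.
>>> processor = AutoProcessor.from_pretrained("google/owlvit-base-patch32")
>>> model = AutoModelForZeroShotObjectDetection.from_pretrained("google/owlvit-base-patch32")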
from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: OwlViTConfig configuration class: OwlViTForObjectDetection (OWL-ViT model) Instantiates one of the model classes of the library (with a zero-shot object detection head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: >>> from transformers import AutoConfig, AutoModelForZeroShotObjectDetection >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = AutoModelForZeroShotObjectDetection.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. 
output_loading_info(bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a zero-shot object detection head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: owlvit — OwlViTForObjectDetection (OWL-ViT model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: >>> from transformers import AutoConfig, AutoModelForZeroShotObjectDetection >>> >>> model = AutoModelForZeroShotObjectDetection.from_pretrained("bert-base-cased") >>> >>> model = AutoModelForZeroShotObjectDetection.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json") >>> model = AutoModelForZeroShotObjectDetection.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config ... ) Audio The following auto classes are available for the following audio tasks.
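For orientation, all of the audio auto classes below follow the same pattern: the checkpoint's configuration (its model_type) determines which concrete architecture is instantiated. A minimal sketch, assuming the superb/wav2vec2-base-superb-ks checkpoint (an assumption for illustration; any checkpoint of a supported model type works):

>>> from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

>>> # The assumed checkpoint has model_type "wav2vec2", so this resolves to Wav2Vec2ForSequenceClassification.
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("superb/wav2vec2-base-superb-ks")
>>> model = AutoModelForAudioClassification.from_pretrained("superb/wav2vec2-base-superb-ks")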
AutoModelForAudioClassification class transformers.AutoModelForAudioClassification < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with an audio classification head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: ASTConfig configuration class: ASTForAudioClassification (Audio Spectrogram Transformer model) Data2VecAudioConfig configuration class: Data2VecAudioForSequenceClassification (Data2VecAudio model) HubertConfig configuration class: HubertForSequenceClassification (Hubert model) SEWConfig configuration class: SEWForSequenceClassification (SEW model) SEWDConfig configuration class: SEWDForSequenceClassification (SEW-D model) UniSpeechConfig configuration class: UniSpeechForSequenceClassification (UniSpeech model) UniSpeechSatConfig configuration class: UniSpeechSatForSequenceClassification (UniSpeechSat model) Wav2Vec2Config configuration class: Wav2Vec2ForSequenceClassification (Wav2Vec2 model) Wav2Vec2ConformerConfig configuration class: Wav2Vec2ConformerForSequenceClassification (Wav2Vec2-Conformer model) WavLMConfig configuration class: WavLMForSequenceClassification (WavLM model) WhisperConfig configuration class: WhisperForAudioClassification (Whisper model) Instantiates one of the model classes of the library (with an audio classification head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: >>> from transformers import AutoConfig, AutoModelForAudioClassification >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = AutoModelForAudioClassification.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with an audio classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: audio-spectrogram-transformer — ASTForAudioClassification (Audio Spectrogram Transformer model) data2vec-audio — Data2VecAudioForSequenceClassification (Data2VecAudio model) hubert — HubertForSequenceClassification (Hubert model) sew — SEWForSequenceClassification (SEW model) sew-d — SEWDForSequenceClassification (SEW-D model) unispeech — UniSpeechForSequenceClassification (UniSpeech model) unispeech-sat — UniSpeechSatForSequenceClassification (UniSpeechSat model) wav2vec2 — Wav2Vec2ForSequenceClassification (Wav2Vec2 model) wav2vec2-conformer — Wav2Vec2ConformerForSequenceClassification (Wav2Vec2-Conformer model) wavlm — WavLMForSequenceClassification (WavLM model) whisper — WhisperForAudioClassification (Whisper model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: >>> from transformers import AutoConfig, AutoModelForAudioClassification >>> >>> model = AutoModelForAudioClassification.from_pretrained("bert-base-cased") >>> >>> model = AutoModelForAudioClassification.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json") >>> model = AutoModelForAudioClassification.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config ... ) TFAutoModelForAudioClassification class transformers.TFAutoModelForAudioClassification < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with an audio classification head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: Wav2Vec2Config configuration class: TFWav2Vec2ForSequenceClassification (Wav2Vec2 model) Instantiates one of the model classes of the library (with an audio classification head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: >>> from transformers import AutoConfig, TFAutoModelForAudioClassification >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = TFAutoModelForAudioClassification.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin).
In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True).
Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with an audio classification head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: wav2vec2 — TFWav2Vec2ForSequenceClassification (Wav2Vec2 model) Examples: >>> from transformers import AutoConfig, TFAutoModelForAudioClassification >>> >>> model = TFAutoModelForAudioClassification.from_pretrained("bert-base-cased") >>> >>> model = TFAutoModelForAudioClassification.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json") >>> model = TFAutoModelForAudioClassification.from_pretrained( ... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config ... ) AutoModelForAudioFrameClassification class transformers.AutoModelForAudioFrameClassification < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with an audio frame (token) classification head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: Data2VecAudioConfig configuration class: Data2VecAudioForAudioFrameClassification (Data2VecAudio model) UniSpeechSatConfig configuration class: UniSpeechSatForAudioFrameClassification (UniSpeechSat model) Wav2Vec2Config configuration class: Wav2Vec2ForAudioFrameClassification (Wav2Vec2 model) Wav2Vec2ConformerConfig configuration class: Wav2Vec2ConformerForAudioFrameClassification (Wav2Vec2-Conformer model) WavLMConfig configuration class: WavLMForAudioFrameClassification (WavLM model) Instantiates one of the model classes of the library (with an audio frame (token) classification head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: >>> from transformers import AutoConfig, AutoModelForAudioFrameClassification >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = AutoModelForAudioFrameClassification.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co.
Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with an audio frame (token) classification head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: data2vec-audio — Data2VecAudioForAudioFrameClassification (Data2VecAudio model) unispeech-sat — UniSpeechSatForAudioFrameClassification (UniSpeechSat model) wav2vec2 — Wav2Vec2ForAudioFrameClassification (Wav2Vec2 model) wav2vec2-conformer — Wav2Vec2ConformerForAudioFrameClassification (Wav2Vec2-Conformer model) wavlm — WavLMForAudioFrameClassification (WavLM model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: >>> from transformers import AutoConfig, AutoModelForAudioFrameClassification >>> >>> model = AutoModelForAudioFrameClassification.from_pretrained("bert-base-cased") >>> >>> model = AutoModelForAudioFrameClassification.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json") >>> model = AutoModelForAudioFrameClassification.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config ... ) AutoModelForCTC class transformers.AutoModelForCTC < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a connectionist temporal classification head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error).
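Before the method reference, a short usage sketch. The facebook/wav2vec2-base-960h checkpoint is an assumption used for illustration; since its model_type is "wav2vec2", the call resolves to Wav2Vec2ForCTC, as listed under from_pretrained() below.

>>> from transformers import AutoProcessor, AutoModelForCTC

>>> # Assumed example checkpoint; the processor bundles the feature extractor and the CTC tokenizer.
>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
>>> model = AutoModelForCTC.from_pretrained("facebook/wav2vec2-base-960h")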
from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: Data2VecAudioConfig configuration class: Data2VecAudioForCTC (Data2VecAudio model) HubertConfig configuration class: HubertForCTC (Hubert model) MCTCTConfig configuration class: MCTCTForCTC (M-CTC-T model) SEWConfig configuration class: SEWForCTC (SEW model) SEWDConfig configuration class: SEWDForCTC (SEW-D model) UniSpeechConfig configuration class: UniSpeechForCTC (UniSpeech model) UniSpeechSatConfig configuration class: UniSpeechSatForCTC (UniSpeechSat model) Wav2Vec2Config configuration class: Wav2Vec2ForCTC (Wav2Vec2 model) Wav2Vec2ConformerConfig configuration class: Wav2Vec2ConformerForCTC (Wav2Vec2-Conformer model) WavLMConfig configuration class: WavLMForCTC (WavLM model) Instantiates one of the model classes of the library (with a connectionist temporal classification head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: >>> from transformers import AutoConfig, AutoModelForCTC >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = AutoModelForCTC.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). 
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a connectionist temporal classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: data2vec-audio — Data2VecAudioForCTC (Data2VecAudio model) hubert — HubertForCTC (Hubert model) mctct — MCTCTForCTC (M-CTC-T model) sew — SEWForCTC (SEW model) sew-d — SEWDForCTC (SEW-D model) unispeech — UniSpeechForCTC (UniSpeech model) unispeech-sat — UniSpeechSatForCTC (UniSpeechSat model) wav2vec2 — Wav2Vec2ForCTC (Wav2Vec2 model) wav2vec2-conformer — Wav2Vec2ConformerForCTC (Wav2Vec2-Conformer model) wavlm — WavLMForCTC (WavLM model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: >>> from transformers import AutoConfig, AutoModelForCTC >>> >>> model = AutoModelForCTC.from_pretrained("bert-base-cased") >>> >>> model = AutoModelForCTC.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json") >>> model = AutoModelForCTC.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config ... ) AutoModelForSpeechSeq2Seq class transformers.AutoModelForSpeechSeq2Seq < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: Pop2PianoConfig configuration class: Pop2PianoForConditionalGeneration (Pop2Piano model) Speech2TextConfig configuration class: Speech2TextForConditionalGeneration (Speech2Text model) SpeechEncoderDecoderConfig configuration class: SpeechEncoderDecoderModel (Speech Encoder decoder model) SpeechT5Config configuration class: SpeechT5ForSpeechToText (SpeechT5 model) WhisperConfig configuration class: WhisperForConditionalGeneration (Whisper model) Instantiates one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: >>> from transformers import AutoConfig, AutoModelForSpeechSeq2Seq >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = AutoModelForSpeechSeq2Seq.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index). 
In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: pop2piano — Pop2PianoForConditionalGeneration (Pop2Piano model) speech-encoder-decoder — SpeechEncoderDecoderModel (Speech Encoder decoder model) speech_to_text — Speech2TextForConditionalGeneration (Speech2Text model) speecht5 — SpeechT5ForSpeechToText (SpeechT5 model) whisper — WhisperForConditionalGeneration (Whisper model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: >>> from transformers import AutoConfig, AutoModelForSpeechSeq2Seq >>> >>> model = AutoModelForSpeechSeq2Seq.from_pretrained("bert-base-cased") >>> >>> model = AutoModelForSpeechSeq2Seq.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json") >>> model = AutoModelForSpeechSeq2Seq.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config ... ) TFAutoModelForSpeechSeq2Seq class transformers.TFAutoModelForSpeechSeq2Seq < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: Speech2TextConfig configuration class: TFSpeech2TextForConditionalGeneration (Speech2Text model) WhisperConfig configuration class: TFWhisperForConditionalGeneration (Whisper model) Instantiates one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. 
Examples: >>> from transformers import AutoConfig, TFAutoModelForSpeechSeq2Seq >>> # Download configuration from huggingface.co and cache. >>> config = AutoConfig.from_pretrained("openai/whisper-tiny") >>> model = TFAutoModelForSpeechSeq2Seq.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model into a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (i.e., do not try to download the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: speech_to_text — TFSpeech2TextForConditionalGeneration (Speech2Text model) whisper — TFWhisperForConditionalGeneration (Whisper model) Examples: >>> from transformers import AutoConfig, TFAutoModelForSpeechSeq2Seq >>> # Download model and configuration from huggingface.co and cache. >>> model = TFAutoModelForSpeechSeq2Seq.from_pretrained("openai/whisper-tiny") >>> # Update configuration during loading >>> model = TFAutoModelForSpeechSeq2Seq.from_pretrained("openai/whisper-tiny", output_attentions=True) >>> model.config.output_attentions True >>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower) >>> config = AutoConfig.from_pretrained("./pt_model/whisper_pt_model_config.json") >>> model = TFAutoModelForSpeechSeq2Seq.from_pretrained( ... "./pt_model/whisper_pytorch_model.bin", from_pt=True, config=config ... ) FlaxAutoModelForSpeechSeq2Seq class transformers.FlaxAutoModelForSpeechSeq2Seq < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: SpeechEncoderDecoderConfig configuration class: FlaxSpeechEncoderDecoderModel (Speech Encoder decoder model) WhisperConfig configuration class: FlaxWhisperForConditionalGeneration (Whisper model) Instantiates one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
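As a hedged sketch of the Flax loading paths (the openai/whisper-tiny checkpoint and the availability of native Flax weights for it are assumptions, not guarantees made by this page):
>>> from transformers import FlaxAutoModelForSpeechSeq2Seq
>>> # Load native Flax weights if the repository publishes them
>>> model = FlaxAutoModelForSpeechSeq2Seq.from_pretrained("openai/whisper-tiny")
>>> # Otherwise, convert the PyTorch weights on the fly (slower, see from_pt below)
>>> model = FlaxAutoModelForSpeechSeq2Seq.from_pretrained("openai/whisper-tiny", from_pt=True)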
Examples: >>> from transformers import AutoConfig, FlaxAutoModelForSpeechSeq2Seq >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = FlaxAutoModelForSpeechSeq2Seq.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. 
code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: speech-encoder-decoder — FlaxSpeechEncoderDecoderModel (Speech Encoder decoder model) whisper — FlaxWhisperForConditionalGeneration (Whisper model) Examples: >>> from transformers import AutoConfig, FlaxAutoModelForSpeechSeq2Seq >>> # Download model and configuration from huggingface.co and cache. >>> model = FlaxAutoModelForSpeechSeq2Seq.from_pretrained("openai/whisper-tiny") >>> # Update configuration during loading >>> model = FlaxAutoModelForSpeechSeq2Seq.from_pretrained("openai/whisper-tiny", output_attentions=True) >>> model.config.output_attentions True >>> # Loading from a PyTorch checkpoint file instead of a Flax model (slower) >>> config = AutoConfig.from_pretrained("./pt_model/whisper_pt_model_config.json") >>> model = FlaxAutoModelForSpeechSeq2Seq.from_pretrained( ... "./pt_model/whisper_pytorch_model.bin", from_pt=True, config=config ... ) AutoModelForAudioXVector class transformers.AutoModelForAudioXVector < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with an audio retrieval via x-vector head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument.
This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check whether using save_pretrained() and from_pretrained() is not a simpler option. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (i.e., do not try to download the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True).
Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with an audio retrieval via x-vector head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: data2vec-audio — Data2VecAudioForXVector (Data2VecAudio model) unispeech-sat — UniSpeechSatForXVector (UniSpeechSat model) wav2vec2 — Wav2Vec2ForXVector (Wav2Vec2 model) wav2vec2-conformer — Wav2Vec2ConformerForXVector (Wav2Vec2-Conformer model) wavlm — WavLMForXVector (WavLM model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train(). Examples: >>> from transformers import AutoConfig, AutoModelForAudioXVector >>> # Download model and configuration from huggingface.co and cache. >>> model = AutoModelForAudioXVector.from_pretrained("microsoft/wavlm-base-plus-sv") >>> # Update configuration during loading >>> model = AutoModelForAudioXVector.from_pretrained("microsoft/wavlm-base-plus-sv", output_attentions=True) >>> model.config.output_attentions True >>> # Loading from a TF checkpoint file instead of a PyTorch model (slower) >>> config = AutoConfig.from_pretrained("./tf_model/wavlm_tf_model_config.json") >>> model = AutoModelForAudioXVector.from_pretrained( ... "./tf_model/wavlm_tf_checkpoint.ckpt.index", from_tf=True, config=config ... ) AutoModelForTextToSpectrogram class transformers.AutoModelForTextToSpectrogram < source > ( *args **kwargs ) AutoModelForTextToWaveform class transformers.AutoModelForTextToWaveform < source > ( *args **kwargs ) Multimodal The following auto classes are available for the following multimodal tasks. AutoModelForTableQuestionAnswering class transformers.AutoModelForTableQuestionAnswering < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a table question answering head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: TapasConfig configuration class: TapasForQuestionAnswering (TAPAS model) Instantiates one of the model classes of the library (with a table question answering head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
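For a quick end-to-end check of a table question answering checkpoint, the pipeline API wraps the same auto classes. The sketch below is illustrative only: it assumes pandas and the TAPAS runtime dependencies are installed, and the table contents are made-up data.
>>> from transformers import pipeline
>>> table_qa = pipeline("table-question-answering", model="google/tapas-base-finetuned-wtq")
>>> # TAPAS expects every cell as a string
>>> table = {"Repository": ["transformers", "datasets"], "Stars": ["120000", "19000"]}
>>> table_qa(table=table, query="Which repository has the most stars?")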
Examples: >>> from transformers import AutoConfig, AutoModelForTableQuestionAnswering >>> >>> config = AutoConfig.from_pretrained("google/tapas-base-finetuned-wtq") >>> model = AutoModelForTableQuestionAnswering.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. 
It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a table question answering head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: tapas — TapasForQuestionAnswering (TAPAS model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train(). Examples: >>> from transformers import AutoConfig, AutoModelForTableQuestionAnswering >>> # Download model and configuration from huggingface.co and cache. >>> model = AutoModelForTableQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq") >>> # Update configuration during loading >>> model = AutoModelForTableQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq", output_attentions=True) >>> model.config.output_attentions True >>> # Loading from a TF checkpoint file instead of a PyTorch model (slower) >>> config = AutoConfig.from_pretrained("./tf_model/tapas_tf_model_config.json") >>> model = AutoModelForTableQuestionAnswering.from_pretrained( ... "./tf_model/tapas_tf_checkpoint.ckpt.index", from_tf=True, config=config ... ) TFAutoModelForTableQuestionAnswering class transformers.TFAutoModelForTableQuestionAnswering < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a table question answering head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error).
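Usage mirrors the PyTorch class above. A minimal sketch, assuming the google/tapas-base-finetuned-wtq repository ships TensorFlow weights (if it only ships PyTorch weights, from_pt=True converts them on the fly):
>>> from transformers import TFAutoModelForTableQuestionAnswering
>>> # Load TensorFlow weights directly
>>> model = TFAutoModelForTableQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq")
>>> # Or convert a PyTorch checkpoint (slower)
>>> model = TFAutoModelForTableQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq", from_pt=True)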
from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: TapasConfig configuration class: TFTapasForQuestionAnswering (TAPAS model) Instantiates one of the model classes of the library (with a table question answering head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: >>> from transformers import AutoConfig, TFAutoModelForTableQuestionAnswering >>> >>> config = AutoConfig.from_pretrained("google/tapas-base-finetuned-wtq") >>> model = TFAutoModelForTableQuestionAnswering.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. 
It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a table question answering head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: tapas — TFTapasForQuestionAnswering (TAPAS model) Examples: >>> from transformers import AutoConfig, TFAutoModelForTableQuestionAnswering >>> >>> model = TFAutoModelForTableQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq") >>> >>> model = TFAutoModelForTableQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./pt_model/tapas_pt_model_config.json") >>> model = TFAutoModelForTableQuestionAnswering.from_pretrained( ... "./pt_model/tapas_pytorch_model.bin", from_pt=True, config=config ... ) AutoModelForDocumentQuestionAnswering class transformers.AutoModelForDocumentQuestionAnswering < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a document question answering head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). 
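Because document question answering also involves OCR and word bounding boxes, the pipeline API is the quickest way to exercise one of these checkpoints. The sketch below is an illustration only: it assumes Pillow and pytesseract are installed, and invoice.png stands in for a document image of your own.
>>> from transformers import pipeline
>>> doc_qa = pipeline("document-question-answering", model="impira/layoutlm-document-qa")
>>> # The pipeline runs OCR on the image, then extracts the answer span
>>> doc_qa(image="invoice.png", question="What is the invoice number?")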
from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: LayoutLMConfig configuration class: LayoutLMForQuestionAnswering (LayoutLM model) LayoutLMv2Config configuration class: LayoutLMv2ForQuestionAnswering (LayoutLMv2 model) LayoutLMv3Config configuration class: LayoutLMv3ForQuestionAnswering (LayoutLMv3 model) Instantiates one of the model classes of the library (with a document question answering head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: >>> from transformers import AutoConfig, AutoModelForDocumentQuestionAnswering >>> >>> config = AutoConfig.from_pretrained("impira/layoutlm-document-qa", revision="52e01b3") >>> model = AutoModelForDocumentQuestionAnswering.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. 
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (i.e., do not try to download the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a document question answering head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: layoutlm — LayoutLMForQuestionAnswering (LayoutLM model) layoutlmv2 — LayoutLMv2ForQuestionAnswering (LayoutLMv2 model) layoutlmv3 — LayoutLMv3ForQuestionAnswering (LayoutLMv3 model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated).
To train the model, you should first set it back in training mode with model.train() Examples: >>> from transformers import AutoConfig, AutoModelForDocumentQuestionAnswering >>> >>> model = AutoModelForDocumentQuestionAnswering.from_pretrained("impira/layoutlm-document-qa", revision="52e01b3") >>> >>> model = AutoModelForDocumentQuestionAnswering.from_pretrained("impira/layoutlm-document-qa", revision="52e01b3", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./tf_model/layoutlm_tf_model_config.json") >>> model = AutoModelForDocumentQuestionAnswering.from_pretrained( ... "./tf_model/layoutlm_tf_checkpoint.ckpt.index", from_tf=True, config=config ... ) TFAutoModelForDocumentQuestionAnswering class transformers.TFAutoModelForDocumentQuestionAnswering < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a document question answering head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: LayoutLMConfig configuration class: TFLayoutLMForQuestionAnswering (LayoutLM model) LayoutLMv3Config configuration class: TFLayoutLMv3ForQuestionAnswering (LayoutLMv3 model) Instantiates one of the model classes of the library (with a document question answering head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: >>> from transformers import AutoConfig, TFAutoModelForDocumentQuestionAnswering >>> >>> config = AutoConfig.from_pretrained("impira/layoutlm-document-qa", revision="52e01b3") >>> model = TFAutoModelForDocumentQuestionAnswering.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. 
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (i.e., do not try to download the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a document question answering head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: layoutlm — TFLayoutLMForQuestionAnswering (LayoutLM model) layoutlmv3 — TFLayoutLMv3ForQuestionAnswering (LayoutLMv3 model) Examples: >>> from transformers import AutoConfig, TFAutoModelForDocumentQuestionAnswering >>> >>> model = TFAutoModelForDocumentQuestionAnswering.from_pretrained("impira/layoutlm-document-qa", revision="52e01b3") >>> >>> model = TFAutoModelForDocumentQuestionAnswering.from_pretrained("impira/layoutlm-document-qa", revision="52e01b3", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./pt_model/layoutlm_pt_model_config.json") >>> model = TFAutoModelForDocumentQuestionAnswering.from_pretrained( ... "./pt_model/layoutlm_pytorch_model.bin", from_pt=True, config=config ... ) AutoModelForVisualQuestionAnswering class transformers.AutoModelForVisualQuestionAnswering < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a visual question answering head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: Blip2Config configuration class: Blip2ForConditionalGeneration (BLIP-2 model) ViltConfig configuration class: ViltForQuestionAnswering (ViLT model) Instantiates one of the model classes of the library (with a visual question answering head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: >>> from transformers import AutoConfig, AutoModelForVisualQuestionAnswering >>> >>> config = AutoConfig.from_pretrained("dandelin/vilt-b32-finetuned-vqa") >>> model = AutoModelForVisualQuestionAnswering.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). 
The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info(bool, optional, defaults to False) — Whether ot not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only(bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code leaves in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it being loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done) If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. 
Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a visual question answering head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: blip-2 — Blip2ForConditionalGeneration (BLIP-2 model) vilt — ViltForQuestionAnswering (ViLT model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train() Examples: >>> from transformers import AutoConfig, AutoModelForVisualQuestionAnswering >>> >>> model = AutoModelForVisualQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa") >>> >>> model = AutoModelForVisualQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./tf_model/vilt_tf_model_config.json") >>> model = AutoModelForVisualQuestionAnswering.from_pretrained( ... "./tf_model/vilt_tf_checkpoint.ckpt.index", from_tf=True, config=config ... ) AutoModelForVision2Seq class transformers.AutoModelForVision2Seq < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a vision-to-text modeling head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: Blip2Config configuration class: Blip2ForConditionalGeneration (BLIP-2 model) BlipConfig configuration class: BlipForConditionalGeneration (BLIP model) GitConfig configuration class: GitForCausalLM (GIT model) InstructBlipConfig configuration class: InstructBlipForConditionalGeneration (InstructBLIP model) Pix2StructConfig configuration class: Pix2StructForConditionalGeneration (Pix2Struct model) VisionEncoderDecoderConfig configuration class: VisionEncoderDecoderModel (Vision Encoder decoder model) Instantiates one of the model classes of the library (with a vision-to-text modeling head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. Examples: >>> from transformers import AutoConfig, AutoModelForVision2Seq >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = AutoModelForVision2Seq.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a tensorflow index checkpoint file (e.g, ./tf_model/model.ckpt.index). 
In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. 
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it is loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a vision-to-text modeling head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: blip — BlipForConditionalGeneration (BLIP model) blip-2 — Blip2ForConditionalGeneration (BLIP-2 model) git — GitForCausalLM (GIT model) instructblip — InstructBlipForConditionalGeneration (InstructBLIP model) pix2struct — Pix2StructForConditionalGeneration (Pix2Struct model) vision-encoder-decoder — VisionEncoderDecoderModel (Vision Encoder decoder model) The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train(). Examples: >>> from transformers import AutoConfig, AutoModelForVision2Seq >>> >>> model = AutoModelForVision2Seq.from_pretrained("bert-base-cased") >>> >>> model = AutoModelForVision2Seq.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json") >>> model = AutoModelForVision2Seq.from_pretrained( ... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config ... ) 
Examples: >>> from transformers import AutoConfig, TFAutoModelForVision2Seq >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = TFAutoModelForVision2Seq.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model into a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. 
code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it is loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a vision-to-text modeling head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: blip — TFBlipForConditionalGeneration (BLIP model) vision-encoder-decoder — TFVisionEncoderDecoderModel (Vision Encoder decoder model) Examples: >>> from transformers import AutoConfig, TFAutoModelForVision2Seq >>> >>> model = TFAutoModelForVision2Seq.from_pretrained("bert-base-cased") >>> >>> model = TFAutoModelForVision2Seq.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json") >>> model = TFAutoModelForVision2Seq.from_pretrained( ... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config ... ) FlaxAutoModelForVision2Seq class transformers.FlaxAutoModelForVision2Seq < source > ( *args **kwargs ) This is a generic model class that will be instantiated as one of the model classes of the library (with a vision-to-text modeling head) when created with the from_pretrained() class method or the from_config() class method. This class cannot be instantiated directly using __init__() (throws an error). from_config < source > ( **kwargs ) Parameters config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class: VisionEncoderDecoderConfig configuration class: FlaxVisionEncoderDecoderModel (Vision Encoder decoder model) Instantiates one of the model classes of the library (with a vision-to-text modeling head) from a configuration. Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights. 
Examples: >>> from transformers import AutoConfig, FlaxAutoModelForVision2Seq >>> >>> config = AutoConfig.from_pretrained("bert-base-cased") >>> model = FlaxAutoModelForVision2Seq.from_config(config) from_pretrained < source > ( *model_args **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — Can be either: A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/. A path or url to a PyTorch state_dict save file (e.g, ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model into a Flax model using the provided conversion scripts and loading the Flax model afterwards. model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method. config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when: The model is a model provided by the library (loaded with the model id string of a pretrained model). The model was saved using save_pretrained() and is reloaded by supplying the save directory. The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory. cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used. from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument). force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist. resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists. proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request. output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages. local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model). revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine. 
code_revision (str, optional, defaults to "main") — The specific revision to use for the code on the Hub, if the code lives in a different repository than the rest of the model. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git. kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it is loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded: If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done). If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function. Instantiate one of the model classes of the library (with a vision-to-text modeling head) from a pretrained model. The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path: vision-encoder-decoder — FlaxVisionEncoderDecoderModel (Vision Encoder decoder model) Examples: >>> from transformers import AutoConfig, FlaxAutoModelForVision2Seq >>> >>> model = FlaxAutoModelForVision2Seq.from_pretrained("bert-base-cased") >>> >>> model = FlaxAutoModelForVision2Seq.from_pretrained("bert-base-cased", output_attentions=True) >>> model.config.output_attentions True >>> >>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json") >>> model = FlaxAutoModelForVision2Seq.from_pretrained( ... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config ... )
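Because the mapping above routes blip-2 checkpoints to Blip2ForConditionalGeneration, the auto classes can also run BLIP-2 image captioning end to end. The following is a minimal sketch rather than an official example: it assumes the Salesforce/blip2-opt-2.7b checkpoint, a local image file (cat.png is a placeholder path), and a CUDA device for the optional half-precision part.
>>> import torch
>>> from PIL import Image
>>> from transformers import AutoProcessor, AutoModelForVision2Seq
>>> # AutoProcessor resolves to Blip2Processor for BLIP-2 checkpoints
>>> processor = AutoProcessor.from_pretrained("Salesforce/blip2-opt-2.7b")
>>> model = AutoModelForVision2Seq.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16)
>>> model.to("cuda")  # optional, assumes a CUDA device is available
>>> image = Image.open("cat.png")  # placeholder path, replace with your own image
>>> inputs = processor(images=image, return_tensors="pt").to("cuda", torch.float16)
>>> generated_ids = model.generate(**inputs, max_new_tokens=20)
>>> caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip()
Passing a text prompt to the processor alongside the image conditions the generation on that prompt, which turns the same snippet into visual question answering or chat-style prompting.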
https://huggingface.co/docs/transformers/model_doc/fnet
FNet Overview The FNet model was proposed in FNet: Mixing Tokens with Fourier Transforms by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon. The model replaces the self-attention layer in a BERT model with a Fourier transform that returns only the real parts of the transform. The model is significantly faster than BERT because the Fourier mixing sublayer is unparameterized, so the model has fewer parameters and is more memory efficient. The model achieves about 92-97% of the accuracy of BERT counterparts on the GLUE benchmark, and trains much faster than BERT. The abstract from the paper is the following: We show that Transformer encoder architectures can be sped up, with limited accuracy costs, by replacing the self-attention sublayers with simple linear transformations that “mix” input tokens. These linear mixers, along with standard nonlinearities in feed-forward layers, prove competent at modeling semantic relationships in several text classification tasks. Most surprisingly, we find that replacing the self-attention sublayer in a Transformer encoder with a standard, unparameterized Fourier Transform achieves 92-97% of the accuracy of BERT counterparts on the GLUE benchmark, but trains 80% faster on GPUs and 70% faster on TPUs at standard 512 input lengths. At longer input lengths, our FNet model is significantly faster: when compared to the “efficient” Transformers on the Long Range Arena benchmark, FNet matches the accuracy of the most accurate models, while outpacing the fastest models across all sequence lengths on GPUs (and across relatively shorter lengths on TPUs). Finally, FNet has a light memory footprint and is particularly efficient at smaller model sizes; for a fixed speed and accuracy budget, small FNet models outperform Transformer counterparts. Tips on usage: The model was trained without an attention mask as it is based on a Fourier transform. The model was trained with maximum sequence length 512 which includes pad tokens. Hence, it is highly recommended to use the same maximum sequence length for fine-tuning and inference. This model was contributed by gchhablani. The original code can be found here. Documentation resources Text classification task guide Token classification task guide Question answering task guide Masked language modeling task guide Multiple choice task guide FNetConfig class transformers.FNetConfig < source > ( vocab_size = 32000 hidden_size = 768 num_hidden_layers = 12 intermediate_size = 3072 hidden_act = 'gelu_new' hidden_dropout_prob = 0.1 max_position_embeddings = 512 type_vocab_size = 4 initializer_range = 0.02 layer_norm_eps = 1e-12 use_tpu_fourier_optimizations = False tpu_short_seq_length = 512 pad_token_id = 3 bos_token_id = 1 eos_token_id = 2 **kwargs ) Parameters vocab_size (int, optional, defaults to 32000) — Vocabulary size of the FNet model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling FNetModel or TFFNetModel. hidden_size (int, optional, defaults to 768) — Dimension of the encoder layers and the pooler layer. num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. intermediate_size (int, optional, defaults to 3072) — Dimension of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder. hidden_act (str or function, optional, defaults to "gelu_new") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported. 
hidden_dropout_prob (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. max_position_embeddings (int, optional, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). type_vocab_size (int, optional, defaults to 4) — The vocabulary size of the token_type_ids passed when calling FNetModel or TFFNetModel. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers. use_tpu_fourier_optimizations (bool, optional, defaults to False) — Determines whether to use TPU optimized FFTs. If True, the model will favor axis-wise FFT transforms. Set to False for GPU/CPU hardware, in which case n-dimensional FFTs are used. tpu_short_seq_length (int, optional, defaults to 512) — The sequence length that is expected by the model when using TPUs. This will be used to initialize the DFT matrix only when use_tpu_fourier_optimizations is set to True and the input sequence length is shorter than or equal to 4096 tokens. This is the configuration class to store the configuration of an FNetModel. It is used to instantiate an FNet model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the FNet google/fnet-base architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import FNetConfig, FNetModel >>> >>> configuration = FNetConfig() >>> >>> model = FNetModel(configuration) >>> >>> configuration = model.config FNetTokenizer class transformers.FNetTokenizer < source > ( vocab_file do_lower_case = False remove_space = True keep_accents = True unk_token = '<unk>' sep_token = '[SEP]' pad_token = '<pad>' cls_token = '[CLS]' mask_token = '[MASK]' sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None **kwargs ) Parameters vocab_file (str) — SentencePiece file (generally has a .spm extension) that contains the vocabulary necessary to instantiate a tokenizer. do_lower_case (bool, optional, defaults to False) — Whether or not to lowercase the input when tokenizing. remove_space (bool, optional, defaults to True) — Whether or not to strip the text when tokenizing (removing excess spaces before and after the string). keep_accents (bool, optional, defaults to True) — Whether or not to keep accents when tokenizing. unk_token (str, optional, defaults to "<unk>") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. sep_token (str, optional, defaults to "[SEP]") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. pad_token (str, optional, defaults to "<pad>") — The token used for padding, for example when batching sequences of different lengths. 
cls_token (str, optional, defaults to "[CLS]") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. mask_token (str, optional, defaults to "[MASK]") — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. sp_model_kwargs (dict, optional) — Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for SentencePiece can be used, among other things, to set: enable_sampling: Enable subword regularization. nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout. nbest_size = {0,1}: No sampling is performed. nbest_size > 1: samples from the nbest_size results. nbest_size < 0: assuming that nbest_size is infinite and samples from all hypotheses (lattice) using the forward-filtering-and-backward-sampling algorithm. alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout. sp_model (SentencePieceProcessor) — The SentencePiece processor that is used for every conversion (string, tokens and IDs). Construct an FNet tokenizer. Adapted from AlbertTokenizer. Based on SentencePiece. This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. build_inputs_with_special_tokens < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs to which the special tokens will be added. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of input IDs with the appropriate special tokens. Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. An FNet sequence has the following format: single sequence: [CLS] X [SEP] pair of sequences: [CLS] A [SEP] B [SEP] get_special_tokens_mask < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None already_has_special_tokens: bool = False ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. already_has_special_tokens (bool, optional, defaults to False) — Whether or not the token list is already formatted with special tokens for the model. A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model method. create_token_type_ids_from_sequences < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of token type IDs according to the given sequence(s). Create a mask from the two sequences passed to be used in a sequence-pair classification task. An FNet sequence pair mask has the following format: 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 | first sequence | second sequence | If token_ids_1 is None, this method only returns the first portion of the mask (0s). 
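To make the special-token layout and the token type mask above concrete, here is a small sketch. It is not part of the original reference; it assumes the google/fnet-base checkpoint referenced elsewhere on this page and the sentencepiece package.
>>> from transformers import FNetTokenizer
>>> tokenizer = FNetTokenizer.from_pretrained("google/fnet-base")
>>> encoding = tokenizer("How are you?", "I am fine.")
>>> # [CLS] and [SEP] are inserted automatically: [CLS] A [SEP] B [SEP]
>>> tokenizer.convert_ids_to_tokens(encoding["input_ids"])[:1]
['[CLS]']
>>> # token_type_ids is 0 over the first segment and 1 over the second
>>> sorted(set(encoding["token_type_ids"]))
[0, 1]
The same call also accepts a single sequence, in which case only the 0 portion of the mask is produced.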
save_vocabulary < source > ( save_directory: str filename_prefix: typing.Optional[str] = None ) FNetTokenizerFast class transformers.FNetTokenizerFast < source > ( vocab_file = None tokenizer_file = None do_lower_case = False remove_space = True keep_accents = True unk_token = '<unk>' sep_token = '[SEP]' pad_token = '<pad>' cls_token = '[CLS]' mask_token = '[MASK]' **kwargs ) Parameters vocab_file (str) — SentencePiece file (generally has a .spm extension) that contains the vocabulary necessary to instantiate a tokenizer. do_lower_case (bool, optional, defaults to False) — Whether or not to lowercase the input when tokenizing. remove_space (bool, optional, defaults to True) — Whether or not to strip the text when tokenizing (removing excess spaces before and after the string). keep_accents (bool, optional, defaults to True) — Whether or not to keep accents when tokenizing. unk_token (str, optional, defaults to "<unk>") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. sep_token (str, optional, defaults to "[SEP]") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. pad_token (str, optional, defaults to "<pad>") — The token used for padding, for example when batching sequences of different lengths. cls_token (str, optional, defaults to "[CLS]") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. mask_token (str, optional, defaults to "[MASK]") — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. Construct a “fast” FNetTokenizer (backed by HuggingFace’s tokenizers library). Adapted from AlbertTokenizerFast. Based on Unigram. This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should refer to this superclass for more information regarding those methods build_inputs_with_special_tokens < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs to which the special tokens will be added token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. list of input IDs with the appropriate special tokens. Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. An FNet sequence has the following format: single sequence: [CLS] X [SEP] pair of sequences: [CLS] A [SEP] B [SEP] create_token_type_ids_from_sequences < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of ids. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of token type IDs according to the given sequence(s). Creates a mask from the two sequences passed to be used in a sequence-pair classification task. 
An FNet sequence pair mask has the following format: 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 | first sequence | second sequence | if token_ids_1 is None, only returns the first portion of the mask (0s). FNetModel class transformers.FNetModel < source > ( config add_pooling_layer = True ) Parameters config (FNetConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare FNet Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. The model can behave as an encoder, following the architecture described in FNet: Mixing Tokens with Fourier Transforms by James Lee-Thorp, Joshua Ainslie, Ilya Eckstein, Santiago Ontanon. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FNetConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. 
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FNetModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FNetModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("google/fnet-base") >>> model = FNetModel.from_pretrained("google/fnet-base") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state FNetForPreTraining class transformers.FNetForPreTraining < source > ( config ) Parameters config (FNetConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. FNet Model with two heads on top as done during the pretraining: a masked language modeling head and a next sentence prediction (classification) head. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None next_sentence_label: typing.Optional[torch.Tensor] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.fnet.modeling_fnet.FNetForPreTrainingOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? 
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size] next_sentence_label (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the next sequence prediction (classification) loss. Input should be a sequence pair (see input_ids docstring) Indices should be in [0, 1]: 0 indicates sequence B is a continuation of sequence A, 1 indicates sequence B is a random sequence. kwargs (Dict[str, any], optional, defaults to {}) — Used to hide legacy arguments that have been deprecated. Returns transformers.models.fnet.modeling_fnet.FNetForPreTrainingOutput or tuple(torch.FloatTensor) A transformers.models.fnet.modeling_fnet.FNetForPreTrainingOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FNetConfig) and inputs. loss (optional, returned when labels is provided, torch.FloatTensor of shape (1,)) — Total loss as the sum of the masked language modeling loss and the next sequence prediction (classification) loss. prediction_logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). seq_relationship_logits (torch.FloatTensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. The FNetForPreTraining forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: >>> from transformers import AutoTokenizer, FNetForPreTraining >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("google/fnet-base") >>> model = FNetForPreTraining.from_pretrained("google/fnet-base") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> prediction_logits = outputs.prediction_logits >>> seq_relationship_logits = outputs.seq_relationship_logits FNetForMaskedLM class transformers.FNetForMaskedLM < source > ( config ) Parameters config (FNetConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. FNet Model with a language modeling head on top. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]. A transformers.modeling_outputs.MaskedLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FNetConfig) and inputs. 
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FNetForMaskedLM forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FNetForMaskedLM >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("google/fnet-base") >>> model = FNetForMaskedLM.from_pretrained("google/fnet-base") >>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> >>> mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0] >>> predicted_token_id = logits[0, mask_token_index].argmax(axis=-1) >>> labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"] >>> >>> labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100) >>> outputs = model(**inputs, labels=labels) FNetForNextSentencePrediction class transformers.FNetForNextSentencePrediction < source > ( config ) Parameters config (FNetConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. FNet Model with a next sentence prediction (classification) head on top. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None **kwargs ) → transformers.modeling_outputs.NextSentencePredictorOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. 
What are input IDs? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the next sequence prediction (classification) loss. Input should be a sequence pair (see input_ids docstring). Indices should be in [0, 1]: 0 indicates sequence B is a continuation of sequence A, 1 indicates sequence B is a random sequence. A transformers.modeling_outputs.NextSentencePredictorOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FNetConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when next_sentence_label is provided) — Next sequence prediction (classification) loss. logits (torch.FloatTensor of shape (batch_size, 2)) — Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FNetForNextSentencePrediction forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FNetForNextSentencePrediction >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("google/fnet-base") >>> model = FNetForNextSentencePrediction.from_pretrained("google/fnet-base") >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." 
>>> next_sentence = "The sky is blue due to the shorter wavelength of blue light." >>> encoding = tokenizer(prompt, next_sentence, return_tensors="pt") >>> outputs = model(**encoding, labels=torch.LongTensor([1])) >>> logits = outputs.logits >>> assert logits[0, 0] < logits[0, 1] FNetForSequenceClassification class transformers.FNetForSequenceClassification < source > ( config ) Parameters config (FNetConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. FNet Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FNetConfig) and inputs. 
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FNetForSequenceClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example of single-label classification: >>> import torch >>> from transformers import AutoTokenizer, FNetForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("google/fnet-base") >>> model = FNetForSequenceClassification.from_pretrained("google/fnet-base") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_id = logits.argmax().item() >>> >>> num_labels = len(model.config.id2label) >>> model = FNetForSequenceClassification.from_pretrained("google/fnet-base", num_labels=num_labels) >>> labels = torch.tensor([1]) >>> loss = model(**inputs, labels=labels).loss Example of multi-label classification: >>> import torch >>> from transformers import AutoTokenizer, FNetForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("google/fnet-base") >>> model = FNetForSequenceClassification.from_pretrained("google/fnet-base", problem_type="multi_label_classification") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5] >>> >>> num_labels = len(model.config.id2label) >>> model = FNetForSequenceClassification.from_pretrained( ... "google/fnet-base", num_labels=num_labels, problem_type="multi_label_classification" ... ) >>> labels = torch.sum( ... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1 ... ).to(torch.float) >>> loss = model(**inputs, labels=labels).loss FNetForMultipleChoice class transformers.FNetForMultipleChoice < source > ( config ) Parameters config (FNetConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. FNet Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. 
for RocStories/SWAG tasks. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See input_ids above) A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FNetConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above). Classification scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. 
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FNetForMultipleChoice forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FNetForMultipleChoice >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("google/fnet-base") >>> model = FNetForMultipleChoice.from_pretrained("google/fnet-base") >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." >>> choice0 = "It is eaten with a fork and a knife." >>> choice1 = "It is eaten while held in the hand." >>> labels = torch.tensor(0).unsqueeze(0) >>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True) >>> outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) >>> >>> loss = outputs.loss >>> logits = outputs.logits FNetForTokenClassification class transformers.FNetForTokenClassification < source > ( config ) Parameters config (FNetConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. FNet Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? 
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1]. A transformers.modeling_outputs.TokenClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FNetConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FNetForTokenClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FNetForTokenClassification >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("google/fnet-base") >>> model = FNetForTokenClassification.from_pretrained("google/fnet-base") >>> inputs = tokenizer( ... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt" ... ) >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_token_class_ids = logits.argmax(-1) >>> >>> >>> >>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]] >>> labels = predicted_token_class_ids >>> loss = model(**inputs, labels=labels).loss FNetForQuestionAnswering class transformers.FNetForQuestionAnswering < source > ( config ) Parameters config (FNetConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. 
FNet Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute span start logits and span end logits). This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None start_positions: typing.Optional[torch.Tensor] = None end_positions: typing.Optional[torch.Tensor] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. start_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. end_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FNetConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). 
end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FNetForQuestionAnswering forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FNetForQuestionAnswering >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("google/fnet-base") >>> model = FNetForQuestionAnswering.from_pretrained("google/fnet-base") >>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" >>> inputs = tokenizer(question, text, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> answer_start_index = outputs.start_logits.argmax() >>> answer_end_index = outputs.end_logits.argmax() >>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] >>> >>> target_start_index = torch.tensor([14]) >>> target_end_index = torch.tensor([15]) >>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index) >>> loss = outputs.loss
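To turn the predicted span back into a readable answer string, the token IDs can be decoded with the tokenizer. A minimal continuation of the example above (note that google/fnet-base is not fine-tuned for question answering, so the decoded span is only meaningful with a fine-tuned checkpoint):
>>> # decode the predicted span back to text (continuation of the example above)
>>> tokenizer.decode(predict_answer_tokens, skip_special_tokens=True)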
https://huggingface.co/docs/transformers/model_doc/focalnet
FocalNet Overview The FocalNet model was proposed in Focal Modulation Networks by Jianwei Yang, Chunyuan Li, Xiyang Dai, Lu Yuan, Jianfeng Gao. FocalNets completely replace self-attention (used in models like ViT and Swin) by a focal modulation mechanism for modeling token interactions in vision. The authors claim that FocalNets outperform self-attention based models with similar computational costs on the tasks of image classification, object detection, and segmentation. The abstract from the paper is the following: We propose focal modulation networks (FocalNets in short), where self-attention (SA) is completely replaced by a focal modulation mechanism for modeling token interactions in vision. Focal modulation comprises three components: (i) hierarchical contextualization, implemented using a stack of depth-wise convolutional layers, to encode visual contexts from short to long ranges, (ii) gated aggregation to selectively gather contexts for each query token based on its content, and (iii) element-wise modulation or affine transformation to inject the aggregated context into the query. Extensive experiments show FocalNets outperform the state-of-the-art SA counterparts (e.g., Swin and Focal Transformers) with similar computational costs on the tasks of image classification, object detection, and segmentation. Specifically, FocalNets with tiny and base size achieve 82.3% and 83.9% top-1 accuracy on ImageNet-1K. After pretrained on ImageNet-22K in 224 resolution, it attains 86.5% and 87.3% top-1 accuracy when finetuned with resolution 224 and 384, respectively. When transferred to downstream tasks, FocalNets exhibit clear superiority. For object detection with Mask R-CNN, FocalNet base trained with 1\times outperforms the Swin counterpart by 2.1 points and already surpasses Swin trained with 3\times schedule (49.0 v.s. 48.5). For semantic segmentation with UPerNet, FocalNet base at single-scale outperforms Swin by 2.4, and beats Swin at multi-scale (50.5 v.s. 49.7). Using large FocalNet and Mask2former, we achieve 58.5 mIoU for ADE20K semantic segmentation, and 57.9 PQ for COCO Panoptic Segmentation. Using huge FocalNet and DINO, we achieved 64.3 and 64.4 mAP on COCO minival and test-dev, respectively, establishing new SoTA on top of much larger attention-based models like Swinv2-G and BEIT-3. Tips: One can use the AutoImageProcessor class to prepare images for the model. This model was contributed by nielsr. The original code can be found here. FocalNetConfig class transformers.FocalNetConfig < source > ( image_size = 224 patch_size = 4 num_channels = 3 embed_dim = 96 use_conv_embed = False hidden_sizes = [192, 384, 768, 768] depths = [2, 2, 6, 2] focal_levels = [2, 2, 2, 2] focal_windows = [3, 3, 3, 3] hidden_act = 'gelu' mlp_ratio = 4.0 hidden_dropout_prob = 0.0 drop_path_rate = 0.1 use_layerscale = False layerscale_value = 0.0001 use_post_layernorm = False use_post_layernorm_in_modulation = False normalize_modulator = False initializer_range = 0.02 layer_norm_eps = 1e-05 encoder_stride = 32 out_features = None out_indices = None **kwargs ) Parameters image_size (int, optional, defaults to 224) — The size (resolution) of each image. patch_size (int, optional, defaults to 4) — The size (resolution) of each patch in the embeddings layer. num_channels (int, optional, defaults to 3) — The number of input channels. embed_dim (int, optional, defaults to 96) — Dimensionality of patch embedding. use_conv_embed (bool, optional, defaults to False) — Whether to use convolutional embedding. 
The authors noted that using convolutional embedding usually improves performance, but it’s not used by default. hidden_sizes (List[int], optional, defaults to [192, 384, 768, 768]) — Dimensionality (hidden size) at each stage. depths (list(int), optional, defaults to [2, 2, 6, 2]) — Depth (number of layers) of each stage in the encoder. focal_levels (list(int), optional, defaults to [2, 2, 2, 2]) — Number of focal levels in each layer of the respective stages in the encoder. focal_windows (list(int), optional, defaults to [3, 3, 3, 3]) — Focal window size in each layer of the respective stages in the encoder. hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder. If string, "gelu", "relu", "selu" and "gelu_new" are supported. mlp_ratio (float, optional, defaults to 4.0) — Ratio of MLP hidden dimensionality to embedding dimensionality. hidden_dropout_prob (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings and encoder. drop_path_rate (float, optional, defaults to 0.1) — Stochastic depth rate. use_layerscale (bool, optional, defaults to False) — Whether to use layer scale in the encoder. layerscale_value (float, optional, defaults to 1e-4) — The initial value of the layer scale. use_post_layernorm (bool, optional, defaults to False) — Whether to use post layer normalization in the encoder. use_post_layernorm_in_modulation (bool, optional, defaults to False) — Whether to use post layer normalization in the modulation layer. normalize_modulator (bool, optional, defaults to False) — Whether to normalize the modulator. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (float, optional, defaults to 1e-5) — The epsilon used by the layer normalization layers. encoder_stride (int, optional, defaults to 32) — Factor to increase the spatial resolution by in the decoder head for masked image modeling. out_features (List[str], optional) — If used as backbone, list of features to output. Can be any of "stem", "stage1", "stage2", etc. (depending on how many stages the model has). If unset and out_indices is set, will default to the corresponding stages. If unset and out_indices is unset, will default to the last stage. out_indices (List[int], optional) — If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how many stages the model has). If unset and out_features is set, will default to the corresponding stages. If unset and out_features is unset, will default to the last stage. This is the configuration class to store the configuration of a FocalNetModel. It is used to instantiate a FocalNet model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the FocalNet microsoft/focalnet-tiny architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example: >>> from transformers import FocalNetConfig, FocalNetModel >>> >>> configuration = FocalNetConfig() >>> >>> model = FocalNetModel(configuration) >>> >>> configuration = model.config FocalNetModel class transformers.FocalNetModel < source > ( config add_pooling_layer = True use_mask_token = False ) Parameters config (FocalNetConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare FocalNet Model outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: typing.Optional[torch.FloatTensor] = None bool_masked_pos: typing.Optional[torch.BoolTensor] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.focalnet.modeling_focalnet.FocalNetModelOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See AutoImageProcessor.__call__() for details. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. bool_masked_pos (torch.BoolTensor of shape (batch_size, num_patches)) — Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0). Returns transformers.models.focalnet.modeling_focalnet.FocalNetModelOutput or tuple(torch.FloatTensor) A transformers.models.focalnet.modeling_focalnet.FocalNetModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FocalNetConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (torch.FloatTensor of shape (batch_size, hidden_size), optional, returned when add_pooling_layer=True is passed) — Average pooling of the last layer hidden-state. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. reshaped_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of shape (batch_size, hidden_size, height, width). Hidden-states of the model at the output of each layer plus the initial embedding outputs reshaped to include the spatial dimensions. The FocalNetModel forward method, overrides the __call__ special method. 
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoImageProcessor, FocalNetModel >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> image_processor = AutoImageProcessor.from_pretrained("microsoft/focalnet-tiny") >>> model = FocalNetModel.from_pretrained("microsoft/focalnet-tiny") >>> inputs = image_processor(image, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state >>> list(last_hidden_states.shape) [1, 49, 768] FocalNetForMaskedImageModeling class transformers.FocalNetForMaskedImageModeling < source > ( config ) Parameters config (FocalNetConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. FocalNet Model with a decoder on top for masked image modeling. This follows the same implementation as in SimMIM. Note that we provide a script to pre-train this model on custom data in our examples directory. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: typing.Optional[torch.FloatTensor] = None bool_masked_pos: typing.Optional[torch.BoolTensor] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.focalnet.modeling_focalnet.FocalNetMaskedImageModelingOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See AutoImageProcessor.__call__() for details. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. bool_masked_pos (torch.BoolTensor of shape (batch_size, num_patches)) — Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0). Returns transformers.models.focalnet.modeling_focalnet.FocalNetMaskedImageModelingOutput or tuple(torch.FloatTensor) A transformers.models.focalnet.modeling_focalnet.FocalNetMaskedImageModelingOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FocalNetConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when bool_masked_pos is provided) — Masked image modeling (MLM) loss. reconstruction (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Reconstructed pixel values. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). 
Hidden-states of the model at the output of each layer plus the initial embedding outputs. reshaped_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of shape (batch_size, hidden_size, height, width). Hidden-states of the model at the output of each layer plus the initial embedding outputs reshaped to include the spatial dimensions. The FocalNetForMaskedImageModeling forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import AutoImageProcessor, FocalNetConfig, FocalNetForMaskedImageModeling >>> import torch >>> from PIL import Image >>> import requests >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image_processor = AutoImageProcessor.from_pretrained("microsoft/focalnet-base-simmim-window6-192") >>> config = FocalNetConfig() >>> model = FocalNetForMaskedImageModeling(config) >>> num_patches = (model.config.image_size // model.config.patch_size) ** 2 >>> pixel_values = image_processor(images=image, return_tensors="pt").pixel_values >>> >>> bool_masked_pos = torch.randint(low=0, high=2, size=(1, num_patches)).bool() >>> outputs = model(pixel_values, bool_masked_pos=bool_masked_pos) >>> loss, reconstructed_pixel_values = outputs.loss, outputs.logits >>> list(reconstructed_pixel_values.shape) [1, 3, 192, 192] FocalNetForImageClassification class transformers.FocalNetForImageClassification < source > ( config ) Parameters config (FocalNetConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. FocalNet Model with an image classification head on top (a linear layer on top of the pooled output) e.g. for ImageNet. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.focalnet.modeling_focalnet.FocalNetImageClassifierOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See AutoImageProcessor.__call__() for details. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. 
If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). Returns transformers.models.focalnet.modeling_focalnet.FocalNetImageClassifierOutput or tuple(torch.FloatTensor) A transformers.models.focalnet.modeling_focalnet.FocalNetImageClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FocalNetConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. reshaped_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of shape (batch_size, hidden_size, height, width). Hidden-states of the model at the output of each layer plus the initial embedding outputs reshaped to include the spatial dimensions. The FocalNetForImageClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoImageProcessor, FocalNetForImageClassification >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> image_processor = AutoImageProcessor.from_pretrained("microsoft/focalnet-tiny") >>> model = FocalNetForImageClassification.from_pretrained("microsoft/focalnet-tiny") >>> inputs = image_processor(image, return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> >>> predicted_label = logits.argmax(-1).item() >>> print(model.config.id2label[predicted_label]) tabby, tabby cat
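The out_features and out_indices options described in FocalNetConfig are meant for using FocalNet as a backbone that returns intermediate feature maps (e.g. for detection or segmentation heads). The following is a minimal sketch, assuming your installed version of Transformers provides the FocalNetBackbone class; the chosen stages are illustrative:
>>> from transformers import AutoImageProcessor, FocalNetBackbone
>>> import torch
>>> from datasets import load_dataset

>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]

>>> processor = AutoImageProcessor.from_pretrained("microsoft/focalnet-tiny")
>>> # out_features selects which stages are returned as feature maps
>>> backbone = FocalNetBackbone.from_pretrained("microsoft/focalnet-tiny", out_features=["stage3", "stage4"])

>>> inputs = processor(image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = backbone(**inputs)

>>> # one feature map per requested stage, in (batch_size, num_channels, height, width) format
>>> [feature_map.shape for feature_map in outputs.feature_maps]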
https://huggingface.co/docs/transformers/model_doc/glpn
GLPN This is a recently introduced model so the API hasn’t been tested extensively. There may be some bugs or slight breaking changes to fix it in the future. If you see something strange, file a Github Issue. Overview The GLPN model was proposed in Global-Local Path Networks for Monocular Depth Estimation with Vertical CutDepth by Doyeon Kim, Woonghyun Ga, Pyungwhan Ahn, Donggyu Joo, Sehwan Chun, Junmo Kim. GLPN combines SegFormer’s hierarchical mix-Transformer with a lightweight decoder for monocular depth estimation. The proposed decoder shows better performance than the previously proposed decoders, with considerably less computational complexity. The abstract from the paper is the following: Depth estimation from a single image is an important task that can be applied to various fields in computer vision, and has grown rapidly with the development of convolutional neural networks. In this paper, we propose a novel structure and training strategy for monocular depth estimation to further improve the prediction accuracy of the network. We deploy a hierarchical transformer encoder to capture and convey the global context, and design a lightweight yet powerful decoder to generate an estimated depth map while considering local connectivity. By constructing connected paths between multi-scale local features and the global decoding stream with our proposed selective feature fusion module, the network can integrate both representations and recover fine details. In addition, the proposed decoder shows better performance than the previously proposed decoders, with considerably less computational complexity. Furthermore, we improve the depth-specific augmentation method by utilizing an important observation in depth estimation to enhance the model. Our network achieves state-of-the-art performance over the challenging depth dataset NYU Depth V2. Extensive experiments have been conducted to validate and show the effectiveness of the proposed approach. Finally, our model shows better generalisation ability and robustness than other comparative models. Tips: One can use GLPNImageProcessor to prepare images for the model. Summary of the approach. Taken from the original paper. This model was contributed by nielsr. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GLPN. Demo notebooks for GLPNForDepthEstimation can be found here. Monocular depth estimation task guide GLPNConfig class transformers.GLPNConfig < source > ( num_channels = 3 num_encoder_blocks = 4 depths = [2, 2, 2, 2] sr_ratios = [8, 4, 2, 1] hidden_sizes = [32, 64, 160, 256] patch_sizes = [7, 3, 3, 3] strides = [4, 2, 2, 2] num_attention_heads = [1, 2, 5, 8] mlp_ratios = [4, 4, 4, 4] hidden_act = 'gelu' hidden_dropout_prob = 0.0 attention_probs_dropout_prob = 0.0 initializer_range = 0.02 drop_path_rate = 0.1 layer_norm_eps = 1e-06 decoder_hidden_size = 64 max_depth = 10 head_in_index = -1 **kwargs ) Parameters num_channels (int, optional, defaults to 3) — The number of input channels. num_encoder_blocks (int, optional, defaults to 4) — The number of encoder blocks (i.e. stages in the Mix Transformer encoder). depths (List[int], optional, defaults to [2, 2, 2, 2]) — The number of layers in each encoder block. sr_ratios (List[int], optional, defaults to [8, 4, 2, 1]) — Sequence reduction ratios in each encoder block. hidden_sizes (List[int], optional, defaults to [32, 64, 160, 256]) — Dimension of each of the encoder blocks. 
patch_sizes (List[int], optional, defaults to [7, 3, 3, 3]) — Patch size before each encoder block. strides (List[int], optional, defaults to [4, 2, 2, 2]) — Stride before each encoder block. num_attention_heads (List[int], optional, defaults to [1, 2, 5, 8]) — Number of attention heads for each attention layer in each block of the Transformer encoder. mlp_ratios (List[int], optional, defaults to [4, 4, 4, 4]) — Ratio of the size of the hidden layer compared to the size of the input layer of the Mix FFNs in the encoder blocks. hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported. hidden_dropout_prob (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. drop_path_rate (float, optional, defaults to 0.1) — The dropout probability for stochastic depth, used in the blocks of the Transformer encoder. layer_norm_eps (float, optional, defaults to 1e-6) — The epsilon used by the layer normalization layers. decoder_hidden_size (int, optional, defaults to 64) — The dimension of the decoder. max_depth (int, optional, defaults to 10) — The maximum depth of the decoder. head_in_index (int, optional, defaults to -1) — The index of the features to use in the head. This is the configuration class to store the configuration of a GLPNModel. It is used to instantiate a GLPN model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the GLPN vinvino02/glpn-kitti architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example:
>>> from transformers import GLPNModel, GLPNConfig
>>> # Initializing a GLPN vinvino02/glpn-kitti style configuration
>>> configuration = GLPNConfig()
>>> # Initializing a model (with random weights) from the configuration
>>> model = GLPNModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
GLPNFeatureExtractor Preprocess an image or a batch of images. GLPNImageProcessor class transformers.GLPNImageProcessor < source > ( do_resize: bool = True size_divisor: int = 32 resample = <Resampling.BILINEAR: 2> do_rescale: bool = True **kwargs ) Parameters do_resize (bool, optional, defaults to True) — Whether to resize the image’s (height, width) dimensions, rounding them down to the closest multiple of size_divisor. Can be overridden by do_resize in preprocess. size_divisor (int, optional, defaults to 32) — When do_resize is True, images are resized so their height and width are rounded down to the closest multiple of size_divisor. Can be overridden by size_divisor in preprocess. resample (PIL.Image resampling filter, optional, defaults to PILImageResampling.BILINEAR) — Resampling filter to use if resizing the image. Can be overridden by resample in preprocess. do_rescale (bool, optional, defaults to True) — Whether or not to apply the scaling factor (to make pixel values floats between 0. and 1.). Can be overridden by do_rescale in preprocess. Constructs a GLPN image processor.
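As a rough usage sketch (the image URL and keyword values below are illustrative), the processor can be called directly on an image: it rounds the height and width down to the nearest multiple of size_divisor and rescales pixel values to floats between 0 and 1.
>>> from transformers import GLPNImageProcessor
>>> from PIL import Image
>>> import requests

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = GLPNImageProcessor(size_divisor=32)
>>> inputs = image_processor(images=image, return_tensors="pt")
>>> # channels-first pixel values; for the 640x480 image above this gives shape (1, 3, 480, 640)
>>> inputs.pixel_values.shape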
preprocess < source > ( images: typing.Union[ForwardRef('PIL.Image.Image'), transformers.utils.generic.TensorType, typing.List[ForwardRef('PIL.Image.Image')], typing.List[transformers.utils.generic.TensorType]] do_resize: typing.Optional[bool] = None size_divisor: typing.Optional[int] = None resample = None do_rescale: typing.Optional[bool] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'> input_data_format: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None **kwargs ) Parameters images (PIL.Image.Image or TensorType or List[np.ndarray] or List[TensorType]) — Images to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set do_normalize=False. do_resize (bool, optional, defaults to self.do_resize) — Whether to resize the input such that the (height, width) dimensions are a multiple of size_divisor. size_divisor (int, optional, defaults to self.size_divisor) — When do_resize is True, images are resized so their height and width are rounded down to the closest multiple of size_divisor. resample (PIL.Image resampling filter, optional, defaults to self.resample) — PIL.Image resampling filter to use if resizing the image e.g. PILImageResampling.BILINEAR. Only has an effect if do_resize is set to True. do_rescale (bool, optional, defaults to self.do_rescale) — Whether or not to apply the scaling factor (to make pixel values floats between 0. and 1.). return_tensors (str or TensorType, optional) — The type of tensors to return. Can be one of: None: Return a list of np.ndarray. TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor. TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor. TensorType.NUMPY or 'np': Return a batch of type np.ndarray. TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray. data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) — The channel dimension format for the output image. Can be one of: ChannelDimension.FIRST: image in (num_channels, height, width) format. ChannelDimension.LAST: image in (height, width, num_channels) format. input_data_format (ChannelDimension or str, optional) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of: "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format. "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format. "none" or ChannelDimension.NONE: image in (height, width) format. Preprocess the given images. GLPNModel class transformers.GLPNModel < source > ( config ) Parameters config (GLPNConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare GLPN encoder (Mix-Transformer) outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
forward < source > ( pixel_values: FloatTensor output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See GLPNImageProcessor.call() for details. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GLPNConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The GLPNModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoImageProcessor, GLPNModel >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> image_processor = AutoImageProcessor.from_pretrained("vinvino02/glpn-kitti") >>> model = GLPNModel.from_pretrained("vinvino02/glpn-kitti") >>> inputs = image_processor(image, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state >>> list(last_hidden_states.shape) [1, 512, 15, 20] GLPNForDepthEstimation class transformers.GLPNForDepthEstimation < source > ( config ) Parameters config (GLPNConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. GLPN Model transformer with a lightweight depth estimation head on top e.g. for KITTI, NYUv2. This model is a PyTorch torch.nn.Module sub-class. 
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: FloatTensor labels: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.DepthEstimatorOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See GLPNImageProcessor.call() for details. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.FloatTensor of shape (batch_size, height, width), optional) — Ground truth depth estimation maps for computing the loss. A transformers.modeling_outputs.DepthEstimatorOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GLPNConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. predicted_depth (torch.FloatTensor of shape (batch_size, height, width)) — Predicted depth for each pixel. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, num_channels, height, width). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The GLPNForDepthEstimation forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import AutoImageProcessor, GLPNForDepthEstimation >>> import torch >>> import numpy as np >>> from PIL import Image >>> import requests >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image_processor = AutoImageProcessor.from_pretrained("vinvino02/glpn-kitti") >>> model = GLPNForDepthEstimation.from_pretrained("vinvino02/glpn-kitti") >>> >>> inputs = image_processor(images=image, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) ... 
predicted_depth = outputs.predicted_depth >>> >>> prediction = torch.nn.functional.interpolate( ... predicted_depth.unsqueeze(1), ... size=image.size[::-1], ... mode="bicubic", ... align_corners=False, ... ) >>> >>> output = prediction.squeeze().cpu().numpy() >>> formatted = (output * 255 / np.max(output)).astype("uint8") >>> depth = Image.fromarray(formatted)
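For quick experimentation, the same checkpoint can also be run through the high-level pipeline API, which wraps the pre- and post-processing steps shown above. A minimal sketch (the input URL is illustrative):
>>> from transformers import pipeline

>>> depth_estimator = pipeline(task="depth-estimation", model="vinvino02/glpn-kitti")
>>> result = depth_estimator("http://images.cocodataset.org/val2017/000000039769.jpg")

>>> # "predicted_depth" is the raw depth tensor, "depth" is a PIL image ready for visualization
>>> result["predicted_depth"].shape
>>> depth = result["depth"]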
https://huggingface.co/docs/transformers/model_doc/openai-gpt
OpenAI GPT Overview The OpenAI GPT model was proposed in Improving Language Understanding by Generative Pre-Training by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever. It’s a causal (unidirectional) transformer pre-trained using language modeling on a large corpus with long range dependencies, the Toronto Book Corpus. The abstract from the paper is the following: Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering, semantic similarity assessment, and document classification. Although large unlabeled text corpora are abundant, labeled data for learning these specific tasks is scarce, making it challenging for discriminatively trained models to perform adequately. We demonstrate that large gains on these tasks can be realized by generative pretraining of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task. In contrast to previous approaches, we make use of task-aware input transformations during fine-tuning to achieve effective transfer while requiring minimal changes to the model architecture. We demonstrate the effectiveness of our approach on a wide range of benchmarks for natural language understanding. Our general task-agnostic model outperforms discriminatively trained models that use architectures specifically crafted for each task, significantly improving upon the state of the art in 9 out of the 12 tasks studied. Tips: GPT is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than the left. GPT was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next token in a sequence. Leveraging this feature allows GPT to generate syntactically coherent text, as can be observed in the run_generation.py example script. Write With Transformer is a webapp created and hosted by Hugging Face showcasing the generative capabilities of several models. GPT is one of them. This model was contributed by thomwolf. The original code can be found here. Note: If you want to reproduce the original tokenization process of the OpenAI GPT paper, you will need to install ftfy and SpaCy: pip install spacy ftfy==4.4.3 python -m spacy download en If you don’t install ftfy and SpaCy, the OpenAIGPTTokenizer will default to tokenizing using BERT’s BasicTokenizer followed by Byte-Pair Encoding (which should be fine for most usage, don’t worry). Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with OpenAI GPT. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. Text Classification A blog post on outperforming OpenAI GPT-3 with SetFit for text-classification. See also: Text classification task guide Text Generation A blog on how to Finetune a non-English GPT-2 Model with Hugging Face. A blog on How to generate text: using different decoding methods for language generation with Transformers with GPT-2. A blog on Training CodeParrot 🦜 from Scratch, a large GPT-2 model. A blog on Faster Text Generation with TensorFlow and XLA with GPT-2. A blog on How to train a Language Model with Megatron-LM with a GPT-2 model. A notebook on how to finetune GPT2 to generate lyrics in the style of your favorite artist.
🌎 A notebook on how to finetune GPT2 to generate tweets in the style of your favorite Twitter user. 🌎 Causal language modeling chapter of the 🤗 Hugging Face Course. OpenAIGPTLMHeadModel is supported by this causal language modeling example script, text generation example script and notebook. TFOpenAIGPTLMHeadModel is supported by this causal language modeling example script and notebook. See also: Causal language modeling task guide Token Classification A course material on Byte-Pair Encoding tokenization. OpenAIGPTConfig class transformers.OpenAIGPTConfig < source > ( vocab_size = 40478 n_positions = 512 n_embd = 768 n_layer = 12 n_head = 12 afn = 'gelu' resid_pdrop = 0.1 embd_pdrop = 0.1 attn_pdrop = 0.1 layer_norm_epsilon = 1e-05 initializer_range = 0.02 summary_type = 'cls_index' summary_use_proj = True summary_activation = None summary_proj_to_labels = True summary_first_dropout = 0.1 **kwargs ) Parameters vocab_size (int, optional, defaults to 40478) — Vocabulary size of the GPT model. Defines the number of different tokens that can be represented by the input_ids passed when calling OpenAIGPTModel or TFOpenAIGPTModel. n_positions (int, optional, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). n_embd (int, optional, defaults to 768) — Dimensionality of the embeddings and hidden states. n_layer (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. n_head (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. afn (str or Callable, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported. resid_pdrop (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. embd_pdrop (float, optional, defaults to 0.1) — The dropout ratio for the embeddings. attn_pdrop (float, optional, defaults to 0.1) — The dropout ratio for the attention. layer_norm_epsilon (float, optional, defaults to 1e-5) — The epsilon to use in the layer normalization layers. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. summary_type (str, optional, defaults to "cls_index") — Argument used when doing sequence summary, used in the models OpenAIGPTDoubleHeadsModel and TFOpenAIGPTDoubleHeadsModel. Has to be one of the following options: "last": Take the last token hidden state (like XLNet). "first": Take the first token hidden state (like BERT). "mean": Take the mean of all tokens hidden states. "cls_index": Supply a Tensor of classification token position (like GPT/GPT-2). "attn": Not implemented now, use multi-head attention. summary_use_proj (bool, optional, defaults to True) — Argument used when doing sequence summary, used in the models OpenAIGPTDoubleHeadsModel and TFOpenAIGPTDoubleHeadsModel. Whether or not to add a projection after the vector extraction. summary_activation (str, optional) — Argument used when doing sequence summary, used in the models OpenAIGPTDoubleHeadsModel and TFOpenAIGPTDoubleHeadsModel. Pass "tanh" for a tanh activation to the output, any other value will result in no activation. 
summary_proj_to_labels (bool, optional, defaults to True) — Argument used when doing sequence summary, used in the models OpenAIGPTDoubleHeadsModel and TFOpenAIGPTDoubleHeadsModel. Whether the projection outputs should have config.num_labels or config.hidden_size classes. summary_first_dropout (float, optional, defaults to 0.1) — Argument used when doing sequence summary, used in the models OpenAIGPTDoubleHeadsModel and TFOpenAIGPTDoubleHeadsModel. The dropout ratio to be used after the projection and activation. use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). This is the configuration class to store the configuration of an OpenAIGPTModel or a TFOpenAIGPTModel. It is used to instantiate a GPT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the GPT openai-gpt architecture from OpenAI. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Examples:
>>> from transformers import OpenAIGPTConfig, OpenAIGPTModel
>>> # Initializing a GPT configuration
>>> configuration = OpenAIGPTConfig()
>>> # Initializing a model (with random weights) from the configuration
>>> model = OpenAIGPTModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
OpenAIGPTTokenizer class transformers.OpenAIGPTTokenizer < source > ( vocab_file merges_file unk_token = '<unk>' **kwargs ) Parameters vocab_file (str) — Path to the vocabulary file. merges_file (str) — Path to the merges file. unk_token (str, optional, defaults to "<unk>") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. Construct a GPT Tokenizer. Based on Byte-Pair-Encoding with the following peculiarities: lowercases all inputs, uses SpaCy tokenizer and ftfy for pre-BPE tokenization if they are installed, falling back to BERT’s BasicTokenizer if not. This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. save_vocabulary < source > ( save_directory: str filename_prefix: typing.Optional[str] = None ) OpenAIGPTTokenizerFast class transformers.OpenAIGPTTokenizerFast < source > ( vocab_file = None merges_file = None tokenizer_file = None unk_token = '<unk>' **kwargs ) Parameters vocab_file (str) — Path to the vocabulary file. merges_file (str) — Path to the merges file. unk_token (str, optional, defaults to "<unk>") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. Construct a “fast” GPT Tokenizer (backed by HuggingFace’s tokenizers library). Based on Byte-Pair-Encoding with the following peculiarities: lowercases all inputs, uses BERT’s BasicTokenizer for pre-BPE tokenization. This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. 
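Because both tokenizers lowercase everything they see and the model was trained with a plain causal language modeling objective, the simplest way to try the checkpoint is through the generate API mentioned in the tips above. The snippet below is a minimal sketch of that workflow; the prompt and the sampling settings (do_sample, max_length, top_k) are illustrative choices only, not recommendations from the original paper.
>>> from transformers import AutoTokenizer, OpenAIGPTLMHeadModel
>>> tokenizer = AutoTokenizer.from_pretrained("openai-gpt")
>>> model = OpenAIGPTLMHeadModel.from_pretrained("openai-gpt")
>>> # The tokenizer lowercases its input, so the casing of the prompt does not matter
>>> inputs = tokenizer("The legend of the holy grail", return_tensors="pt")
>>> # Sample a continuation with the generate API
>>> output_ids = model.generate(**inputs, do_sample=True, max_length=40, top_k=50)
>>> text = tokenizer.decode(output_ids[0], skip_special_tokens=True)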
OpenAI specific outputs class transformers.models.openai.modeling_openai.OpenAIGPTDoubleHeadsModelOutput < source > ( loss: typing.Optional[torch.FloatTensor] = None mc_loss: typing.Optional[torch.FloatTensor] = None logits: FloatTensor = None mc_logits: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss. mc_loss (torch.FloatTensor of shape (1,), optional, returned when mc_labels is provided) — Multiple choice classification loss. logits (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). mc_logits (torch.FloatTensor of shape (batch_size, num_choices)) — Prediction scores of the multiple choice classification head (scores for each choice before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Output type of OpenAIGPTDoubleHeadsModel, combining the outputs of the language modeling and multiple choice classification heads. class transformers.models.openai.modeling_tf_openai.TFOpenAIGPTDoubleHeadsModelOutput < source > ( logits: tf.Tensor = None mc_logits: tf.Tensor = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None ) Parameters logits (tf.Tensor of shape (batch_size, num_choices, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). mc_logits (tf.Tensor of shape (batch_size, num_choices)) — Prediction scores of the multiple choice classification head (scores for each choice before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Output type of TFOpenAIGPTDoubleHeadsModel, combining the outputs of the language modeling and multiple choice classification heads. OpenAIGPTModel class transformers.OpenAIGPTModel < source > ( config ) Parameters config (OpenAIGPTConfig) — Model configuration class with all the parameters of the model. 
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare OpenAI GPT transformer model outputting raw hidden-states without any specific head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. 
A transformers.modeling_outputs.BaseModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (OpenAIGPTConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The OpenAIGPTModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, OpenAIGPTModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("openai-gpt") >>> model = OpenAIGPTModel.from_pretrained("openai-gpt") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state OpenAIGPTLMHeadModel class transformers.OpenAIGPTLMHeadModel < source > ( config ) Parameters config (OpenAIGPTConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. OpenAI GPT Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.CausalLMOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. 
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set labels = input_ids Indices are selected in [-100, 0, ..., config.vocab_size] All labels set to -100 are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size] A transformers.modeling_outputs.CausalLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (OpenAIGPTConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction). logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. 
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The OpenAIGPTLMHeadModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> import torch >>> from transformers import AutoTokenizer, OpenAIGPTLMHeadModel >>> tokenizer = AutoTokenizer.from_pretrained("openai-gpt") >>> model = OpenAIGPTLMHeadModel.from_pretrained("openai-gpt") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs, labels=inputs["input_ids"]) >>> loss = outputs.loss >>> logits = outputs.logits OpenAIGPTDoubleHeadsModel class transformers.OpenAIGPTDoubleHeadsModel < source > ( config ) Parameters config (OpenAIGPTConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. OpenAI GPT Model transformer with a language modeling and a multiple-choice classification head on top e.g. for RocStories/SWAG tasks. The two heads are two linear layers. The language modeling head has its weights tied to the input embeddings, the classification head takes as input the input of a specified classification token index in the input sequence). This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None mc_token_ids: typing.Optional[torch.LongTensor] = None labels: typing.Optional[torch.LongTensor] = None mc_labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.openai.modeling_openai.OpenAIGPTDoubleHeadsModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? 
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. mc_token_ids (torch.LongTensor of shape (batch_size, num_choices), optional, defaults to the index of the last token of the input) — Index of the classification token in each input sequence. Selected in the range [0, input_ids.size(-1) - 1]. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set labels = input_ids Indices are selected in [-100, 0, ..., config.vocab_size] All labels set to -100 are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size] mc_labels (torch.LongTensor of shape (batch_size), optional) — Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices - 1] where num_choices is the size of the second dimension of the input tensors. (see input_ids above) A transformers.models.openai.modeling_openai.OpenAIGPTDoubleHeadsModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (OpenAIGPTConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss. mc_loss (torch.FloatTensor of shape (1,), optional, returned when mc_labels is provided) — Multiple choice classification loss. logits (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). mc_logits (torch.FloatTensor of shape (batch_size, num_choices)) — Prediction scores of the multiple choice classification head (scores for each choice before SoftMax). 
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The OpenAIGPTDoubleHeadsModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import AutoTokenizer, OpenAIGPTDoubleHeadsModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("openai-gpt") >>> model = OpenAIGPTDoubleHeadsModel.from_pretrained("openai-gpt") >>> tokenizer.add_special_tokens( ... {"cls_token": "[CLS]"} ... ) >>> model.resize_token_embeddings(len(tokenizer)) >>> choices = ["Hello, my dog is cute [CLS]", "Hello, my cat is cute [CLS]"] >>> input_ids = torch.tensor([tokenizer.encode(s) for s in choices]).unsqueeze(0) >>> mc_token_ids = torch.tensor([input_ids.size(-1) - 1, input_ids.size(-1) - 1]).unsqueeze(0) >>> outputs = model(input_ids, mc_token_ids=mc_token_ids) >>> lm_logits = outputs.logits >>> mc_logits = outputs.mc_logits OpenAIGPTForSequenceClassification class transformers.OpenAIGPTForSequenceClassification < source > ( config ) Parameters config (OpenAIGPTConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The Original OpenAI GPT Model transformer with a sequence classification head on top (linear layer). OpenAIGPTForSequenceClassification uses the last token in order to do the classification, as other causal models (e.g. GPT-2) do. Since it does classification on the last token, it requires to know the position of the last token. If a pad_token_id is defined in the configuration, it finds the last token that is not a padding token in each row. If no pad_token_id is defined, it simply takes the last value in each row of the batch. Since it cannot guess the padding tokens when inputs_embeds are passed instead of input_ids, it does the same (take the last value in each row of the batch). This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
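One practical consequence of the padding behavior described above: the pretrained openai-gpt checkpoint ships without a padding token, so for batched classification you have to pick one yourself before the model can locate the last non-padding token of each row. The snippet below is a minimal sketch of one way to do this; reusing the unknown token as padding and num_labels=2 are illustrative assumptions, not part of the original checkpoint.
>>> from transformers import AutoTokenizer, OpenAIGPTForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained("openai-gpt")
>>> model = OpenAIGPTForSequenceClassification.from_pretrained("openai-gpt", num_labels=2)
>>> # Reuse the unknown token as padding so the model can find the last real token in each row
>>> tokenizer.pad_token = tokenizer.unk_token
>>> model.config.pad_token_id = tokenizer.pad_token_id
>>> inputs = tokenizer(["a short example", "a somewhat longer example sentence"], padding=True, return_tensors="pt")
>>> logits = model(**inputs).logits
The classification head is newly initialized here, so the logits are only meaningful after fine-tuning.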
forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (OpenAIGPTConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. 
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The OpenAIGPTForSequenceClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example of single-label classification:
>>> import torch
>>> from transformers import AutoTokenizer, OpenAIGPTForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained("openai-gpt")
>>> model = OpenAIGPTForSequenceClassification.from_pretrained("openai-gpt")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_class_id = logits.argmax().item()
>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = OpenAIGPTForSequenceClassification.from_pretrained("openai-gpt", num_labels=num_labels)
>>> labels = torch.tensor([1])
>>> loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
>>> import torch
>>> from transformers import AutoTokenizer, OpenAIGPTForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained("openai-gpt")
>>> model = OpenAIGPTForSequenceClassification.from_pretrained("openai-gpt", problem_type="multi_label_classification")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = OpenAIGPTForSequenceClassification.from_pretrained(
...     "openai-gpt", num_labels=num_labels, problem_type="multi_label_classification"
... )
>>> labels = torch.sum(
...     torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
>>> loss = model(**inputs, labels=labels).loss
TFOpenAIGPTModel class transformers.TFOpenAIGPTModel < source > ( *args **kwargs ) Parameters config (OpenAIGPTConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare OpenAI GPT transformer model outputting raw hidden-states without any specific head on top. This model inherits from TFPreTrainedModel. 
Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? 
head_mask (tf.Tensor or Numpy array of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor or Numpy array of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). A transformers.modeling_tf_outputs.TFBaseModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (OpenAIGPTConfig) and inputs. last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(tf.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFOpenAIGPTModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFOpenAIGPTModel >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("openai-gpt") >>> model = TFOpenAIGPTModel.from_pretrained("openai-gpt") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> outputs = model(inputs) >>> last_hidden_states = outputs.last_hidden_state TFOpenAIGPTLMHeadModel class transformers.TFOpenAIGPTLMHeadModel < source > ( *args **kwargs ) Parameters config (OpenAIGPTConfig) — Model configuration class with all the parameters of the model. 
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. OpenAI GPT Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFCausalLMOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? 
position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (tf.Tensor or Numpy array of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor or Numpy array of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). labels (tf.Tensor of shape (batch_size, sequence_length), optional) — Labels for computing the cross entropy classification loss. Indices should be in [0, ..., config.vocab_size - 1]. A transformers.modeling_tf_outputs.TFCausalLMOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (OpenAIGPTConfig) and inputs. loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss (for next-token prediction). logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFOpenAIGPTLMHeadModel forward method, overrides the __call__ special method. 
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFOpenAIGPTLMHeadModel >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("openai-gpt") >>> model = TFOpenAIGPTLMHeadModel.from_pretrained("openai-gpt") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> outputs = model(inputs) >>> logits = outputs.logits TFOpenAIGPTDoubleHeadsModel class transformers.TFOpenAIGPTDoubleHeadsModel < source > ( *args **kwargs ) Parameters config (OpenAIGPTConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. OpenAI GPT Model transformer with a language modeling and a multiple-choice classification head on top e.g. for RocStories/SWAG tasks. The two heads are two linear layers. The language modeling head has its weights tied to the input embeddings, the classification head takes as input the input of a specified classification token index in the input sequence). This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! 
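As a concrete illustration of the three input formats listed above, the sketch below calls a TF OpenAI GPT model with keyword arguments, with a list, and with a dictionary; all three are equivalent. The bare TFOpenAIGPTModel is used here purely for illustration because it needs no head-specific extra inputs.
>>> from transformers import AutoTokenizer, TFOpenAIGPTModel
>>> tokenizer = AutoTokenizer.from_pretrained("openai-gpt")
>>> model = TFOpenAIGPTModel.from_pretrained("openai-gpt")
>>> encoding = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> # 1. All inputs as keyword arguments (like PyTorch models)
>>> out_kwargs = model(input_ids=encoding["input_ids"], attention_mask=encoding["attention_mask"])
>>> # 2. All inputs as a list, in the order given in the docstring
>>> out_list = model([encoding["input_ids"], encoding["attention_mask"]])
>>> # 3. All inputs as a dictionary keyed by input name
>>> out_dict = model({"input_ids": encoding["input_ids"], "attention_mask": encoding["attention_mask"]})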
call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None mc_token_ids: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: Optional[bool] = False ) → transformers.models.openai.modeling_tf_openai.TFOpenAIGPTDoubleHeadsModelOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (tf.Tensor or Numpy array of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor or Numpy array of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). mc_token_ids (tf.Tensor or Numpy array of shape (batch_size, num_choices), optional, default to index of the last token of the input) — Index of the classification token in each input sequence. Selected in the range [0, input_ids.size(-1) - 1]. 
A transformers.models.openai.modeling_tf_openai.TFOpenAIGPTDoubleHeadsModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (OpenAIGPTConfig) and inputs. logits (tf.Tensor of shape (batch_size, num_choices, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). mc_logits (tf.Tensor of shape (batch_size, num_choices)) — Prediction scores of the multiple choice classification head (scores for each choice before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFOpenAIGPTDoubleHeadsModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> import tensorflow as tf >>> from transformers import AutoTokenizer, TFOpenAIGPTDoubleHeadsModel >>> tokenizer = AutoTokenizer.from_pretrained("openai-gpt") >>> model = TFOpenAIGPTDoubleHeadsModel.from_pretrained("openai-gpt") >>> >>> tokenizer.add_special_tokens({"cls_token": "[CLS]"}) >>> model.resize_token_embeddings(len(tokenizer)) >>> print(tokenizer.cls_token_id, len(tokenizer)) >>> choices = ["Hello, my dog is cute [CLS]", "Hello, my cat is cute [CLS]"] >>> encoding = tokenizer(choices, return_tensors="tf") >>> inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()} >>> inputs["mc_token_ids"] = tf.constant( ... [inputs["input_ids"].shape[-1] - 1, inputs["input_ids"].shape[-1] - 1] ... )[ ... None, : ... ] >>> outputs = model(inputs) >>> lm_prediction_scores, mc_prediction_scores = outputs[:2] TFOpenAIGPTForSequenceClassification class transformers.TFOpenAIGPTForSequenceClassification < source > ( *args **kwargs ) Parameters config (OpenAIGPTConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The OpenAI GPT Model transformer with a sequence classification head on top (linear layer). TFOpenAIGPTForSequenceClassification uses the last token in order to do the classification, as other causal models (e.g. GPT-2) do. Since it does classification on the last token, it requires to know the position of the last token. If a pad_token_id is defined in the configuration, it finds the last token that is not a padding token in each row. If no pad_token_id is defined, it simply takes the last value in each row of the batch. 
Since it cannot guess the padding tokens when inputs_embeds are passed instead of input_ids, it does the same (take the last value in each row of the batch). This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? 
position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (tf.Tensor or Numpy array of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor or Numpy array of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). labels (tf.Tensor of shape (batch_size, sequence_length), optional) — Labels for computing the cross entropy classification loss. Indices should be in [0, ..., config.vocab_size - 1]. A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (OpenAIGPTConfig) and inputs. loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFOpenAIGPTForSequenceClassification forward method, overrides the __call__ special method. 
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFOpenAIGPTForSequenceClassification >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("openai-gpt") >>> model = TFOpenAIGPTForSequenceClassification.from_pretrained("openai-gpt") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> logits = model(**inputs).logits >>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0]) >>> >>> num_labels = len(model.config.id2label) >>> model = TFOpenAIGPTForSequenceClassification.from_pretrained("openai-gpt", num_labels=num_labels) >>> labels = tf.constant(1) >>> loss = model(**inputs, labels=labels).loss
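The classification head described above picks out the last non-padding token in each row whenever a pad_token_id is defined in the configuration. The snippet below is a rough, hypothetical illustration of that indexing logic (assuming right-padding), not the library's exact implementation; note that openai-gpt does not define a padding token by default:

>>> import tensorflow as tf

>>> pad_token_id = 0  # hypothetical padding id for illustration
>>> input_ids = tf.constant([[5, 6, 7, 0, 0], [8, 9, 0, 0, 0]])
>>> # count the non-padding tokens per row, then step back one to get the last real token index
>>> last_non_pad = tf.reduce_sum(tf.cast(input_ids != pad_token_id, tf.int32), axis=-1) - 1
>>> print(last_non_pad.numpy())
[2 1]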
https://huggingface.co/docs/transformers/model_doc/git
GIT Overview The GIT model was proposed in GIT: A Generative Image-to-text Transformer for Vision and Language by Jianfeng Wang, Zhengyuan Yang, Xiaowei Hu, Linjie Li, Kevin Lin, Zhe Gan, Zicheng Liu, Ce Liu, Lijuan Wang. GIT is a decoder-only Transformer that leverages CLIP’s vision encoder to condition the model on vision inputs besides text. The model obtains state-of-the-art results on image captioning and visual question answering benchmarks. The abstract from the paper is the following: In this paper, we design and train a Generative Image-to-text Transformer, GIT, to unify vision-language tasks such as image/video captioning and question answering. While generative models provide a consistent network architecture between pre-training and fine-tuning, existing work typically contains complex structures (uni/multi-modal encoder/decoder) and depends on external modules such as object detectors/taggers and optical character recognition (OCR). In GIT, we simplify the architecture as one image encoder and one text decoder under a single language modeling task. We also scale up the pre-training data and the model size to boost the model performance. Without bells and whistles, our GIT establishes new state of the arts on 12 challenging benchmarks with a large margin. For instance, our model surpasses the human performance for the first time on TextCaps (138.2 vs. 125.5 in CIDEr). Furthermore, we present a new scheme of generation-based image classification and scene text recognition, achieving decent performance on standard benchmarks. Tips: GIT is implemented in a very similar way to GPT-2, the only difference being that the model is also conditioned on pixel_values. One can use GitProcessor to prepare images for the model, and the generate method for autoregressive generation. GIT architecture. Taken from the original paper. This model was contributed by nielsr. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GIT. Demo notebooks regarding inference + fine-tuning GIT on custom data can be found here. See also: Causal language modeling task guide If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it. The resource should ideally demonstrate something new instead of duplicating an existing resource. GitVisionConfig class transformers.GitVisionConfig < source > ( hidden_size = 768 intermediate_size = 3072 num_hidden_layers = 12 num_attention_heads = 12 num_channels = 3 image_size = 224 patch_size = 16 hidden_act = 'quick_gelu' layer_norm_eps = 1e-05 attention_dropout = 0.0 initializer_range = 0.02 **kwargs ) Parameters hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder. num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. image_size (int, optional, defaults to 224) — The size (resolution) of each image. patch_size (int, optional, defaults to 16) — The size (resolution) of each patch. hidden_act (str or function, optional, defaults to "quick_gelu") — The non-linear activation function (function or string) in the encoder and pooler. 
If string, "gelu", "relu", "selu", "gelu_new" and "quick_gelu" are supported. layer_norm_eps (float, optional, defaults to 1e-5) — The epsilon used by the layer normalization layers. attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. This is the configuration class to store the configuration of a GitVisionModel. It is used to instantiate a GIT vision encoder according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the vision encoder of the GIT microsoft/git-base architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import GitVisionConfig, GitVisionModel >>> # Initializing a GitVisionConfig with microsoft/git-base style configuration >>> configuration = GitVisionConfig() >>> # Initializing a GitVisionModel (with random weights) from the microsoft/git-base style configuration >>> model = GitVisionModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config GitVisionModel class transformers.GitVisionModel < source > ( config: GitVisionConfig ) Parameters config (GitVisionConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The vision model from CLIP, used in GIT, without any head or projection on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. forward < source > ( pixel_values: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See CLIPImageProcessor.call() for details. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.git.configuration_git.GitVisionConfig'>) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. 
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The GitVisionModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, GitVisionModel >>> processor = AutoProcessor.from_pretrained("microsoft/git-base") >>> model = GitVisionModel.from_pretrained("microsoft/git-base") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor(images=image, return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_state = outputs.last_hidden_state GitConfig class transformers.GitConfig < source > ( vision_config = None vocab_size = 30522 hidden_size = 768 num_hidden_layers = 6 num_attention_heads = 12 intermediate_size = 3072 hidden_act = 'gelu' hidden_dropout_prob = 0.1 attention_probs_dropout_prob = 0.1 max_position_embeddings = 1024 initializer_range = 0.02 layer_norm_eps = 1e-12 pad_token_id = 0 position_embedding_type = 'absolute' use_cache = True tie_word_embeddings = False bos_token_id = 101 eos_token_id = 102 num_image_with_embedding = None **kwargs ) Parameters vision_config (dict, optional) — Dictionary of configuration options used to initialize GitVisionConfig. vocab_size (int, optional, defaults to 30522) — Vocabulary size of the GIT model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling GitModel. hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (int, optional, defaults to 6) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder. hidden_act (str or Callable, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported. hidden_dropout_prob (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (float, optional, defaults to 0.1) — The dropout ratio for the attention probabilities. 
max_position_embeddings (int, optional, defaults to 1024) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers. position_embedding_type (str, optional, defaults to "absolute") — Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For positional embeddings use "absolute". For more information on "relative_key", please refer to Self-Attention with Relative Position Representations (Shaw et al.). For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models with Better Relative Position Embeddings (Huang et al.). use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). num_image_with_embedding (int, optional) — The number of temporal embeddings to add, in case the model is used for video captioning/VQA. This is the configuration class to store the configuration of a GitModel. It is used to instantiate a GIT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the GIT microsoft/git-base architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Examples: >>> from transformers import GitConfig, GitModel >>> >>> configuration = GitConfig() >>> >>> model = GitModel(configuration) >>> >>> configuration = model.config GitProcessor class transformers.GitProcessor < source > ( image_processor tokenizer ) Parameters image_processor (AutoImageProcessor) — The image processor is a required input. tokenizer (AutoTokenizer) — The tokenizer is a required input. Constructs a GIT processor which wraps a CLIP image processor and a BERT tokenizer into a single processor. GitProcessor offers all the functionalities of CLIPImageProcessor and BertTokenizerFast. See the call() and decode() for more information. __call__ < source > ( text = None images = None return_tensors = None **kwargs ) → BatchEncoding Parameters text (str, List[str], List[List[str]]) — The sequence or batch of sequences to be encoded. Each sequence can be a string or a list of strings (pretokenized string). If the sequences are provided as list of strings (pretokenized), you must set is_split_into_words=True (to lift the ambiguity with a batch of sequences). images (PIL.Image.Image, np.ndarray, torch.Tensor, List[PIL.Image.Image], List[np.ndarray], List[torch.Tensor]) — The image or batch of images to be prepared. Each image can be a PIL image, NumPy array or PyTorch tensor. In case of a NumPy array/PyTorch tensor, each image should be of shape (C, H, W), where C is a number of channels, H and W are image height and width. return_tensors (str or TensorType, optional) — If set, will return tensors of a particular framework. Acceptable values are: 'tf': Return TensorFlow tf.constant objects. 'pt': Return PyTorch torch.Tensor objects. 'np': Return NumPy np.ndarray objects. 'jax': Return JAX jnp.ndarray objects. 
A BatchEncoding with the following fields: input_ids — List of token ids to be fed to a model. Returned when text is not None. attention_mask — List of indices specifying which tokens should be attended to by the model (when return_attention_mask=True or if “attention_mask” is in self.model_input_names and if text is not None). pixel_values — Pixel values to be fed to a model. Returned when images is not None. Main method to prepare one or several sequence(s) and image(s) for the model. This method forwards the text and kwargs arguments to BertTokenizerFast’s call() if text is not None to encode the text. To prepare the image(s), this method forwards the images and kwargs arguments to CLIPImageProcessor’s call() if images is not None. Please refer to the docstring of the above two methods for more information. GitModel class transformers.GitModel < source > ( config ) Parameters config (GitConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare GIT Model transformer consisting of a CLIP image encoder and a text decoder, outputting raw hidden-states without any specific head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None pixel_values: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See CLIPImageProcessor.call() for details. head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. 
Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GitConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The GitModel forward method, overrides the __call__ special method. 
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import AutoProcessor, AutoModel >>> import requests >>> from PIL import Image >>> processor = AutoProcessor.from_pretrained("microsoft/git-base") >>> model = AutoModel.from_pretrained("microsoft/git-base") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> text = "this is an image of two cats" >>> inputs = processor(text, images=image, return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_state = outputs.last_hidden_state GitForCausalLM class transformers.GitForCausalLM < source > ( config ) Parameters config (GitConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. GIT Model with a language modeling head on top for autoregressive language modeling. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None pixel_values: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.List[torch.Tensor]] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See CLIPImageProcessor.call() for details. head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. 
Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels n [0, ..., config.vocab_size] past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). A transformers.modeling_outputs.CausalLMOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GitConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction). logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. 
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. The GitForCausalLM forward method, overrides the __call__ special method. Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: Image captioning example: >>> from transformers import AutoProcessor, AutoModelForCausalLM >>> import requests >>> from PIL import Image >>> processor = AutoProcessor.from_pretrained("microsoft/git-base-coco") >>> model = AutoModelForCausalLM.from_pretrained("microsoft/git-base-coco") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> pixel_values = processor(images=image, return_tensors="pt").pixel_values >>> generated_ids = model.generate(pixel_values=pixel_values, max_length=50) >>> generated_caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] >>> print(generated_caption) two cats sleeping on a pink blanket next to remotes. Visual question answering (VQA) example: >>> import torch >>> from transformers import AutoProcessor, AutoModelForCausalLM >>> from huggingface_hub import hf_hub_download >>> from PIL import Image >>> processor = AutoProcessor.from_pretrained("microsoft/git-base-textvqa") >>> model = AutoModelForCausalLM.from_pretrained("microsoft/git-base-textvqa") >>> file_path = hf_hub_download(repo_id="nielsr/textvqa-sample", filename="bus.png", repo_type="dataset") >>> image = Image.open(file_path).convert("RGB") >>> pixel_values = processor(images=image, return_tensors="pt").pixel_values >>> question = "what does the front of the bus say at the top?" >>> input_ids = processor(text=question, add_special_tokens=False).input_ids >>> input_ids = [processor.tokenizer.cls_token_id] + input_ids >>> input_ids = torch.tensor(input_ids).unsqueeze(0) >>> generated_ids = model.generate(pixel_values=pixel_values, input_ids=input_ids, max_length=50) >>> print(processor.batch_decode(generated_ids, skip_special_tokens=True)) ['what does the front of the bus say at the top? special'] Video captioning example: >>> import av >>> import numpy as np >>> from PIL import Image >>> from huggingface_hub import hf_hub_download >>> from transformers import AutoProcessor, AutoModelForCausalLM >>> processor = AutoProcessor.from_pretrained("microsoft/git-base-vatex") >>> model = AutoModelForCausalLM.from_pretrained("microsoft/git-base-vatex") >>> # set seed for reproducibility >>> np.random.seed(45) >>> def read_video_pyav(container, indices): ... ''' ... Decode the video with PyAV decoder. ... Args: ... container (`av.container.input.InputContainer`): PyAV container. ... indices (`List[int]`): List of frame indices to decode. ... Returns: ... result (np.ndarray): np array of decoded frames of shape (num_frames, height, width, 3). ... ''' ... frames = [] ... container.seek(0) ... start_index = indices[0] ... end_index = indices[-1] ... for i, frame in enumerate(container.decode(video=0)): ... if i > end_index: ... break ... if i >= start_index and i in indices: ... frames.append(frame) ... 
return np.stack([x.to_ndarray(format="rgb24") for x in frames]) >>> def sample_frame_indices(clip_len, frame_sample_rate, seg_len): ... ''' ... Sample a given number of frame indices from the video. ... Args: ... clip_len (`int`): Total number of frames to sample. ... frame_sample_rate (`int`): Sample every n-th frame. ... seg_len (`int`): Maximum allowed index of sample's last frame. ... Returns: ... indices (`List[int]`): List of sampled frame indices ... ''' ... converted_len = int(clip_len * frame_sample_rate) ... end_idx = np.random.randint(converted_len, seg_len) ... start_idx = end_idx - converted_len ... indices = np.linspace(start_idx, end_idx, num=clip_len) ... indices = np.clip(indices, start_idx, end_idx - 1).astype(np.int64) ... return indices >>> >>> file_path = hf_hub_download( ... repo_id="nielsr/video-demo", filename="eating_spaghetti.mp4", repo_type="dataset" ... ) >>> container = av.open(file_path) >>> >>> num_frames = model.config.num_image_with_embedding >>> indices = sample_frame_indices( ... clip_len=num_frames, frame_sample_rate=4, seg_len=container.streams.video[0].frames ... ) >>> frames = read_video_pyav(container, indices) >>> pixel_values = processor(images=list(frames), return_tensors="pt").pixel_values >>> generated_ids = model.generate(pixel_values=pixel_values, max_length=50) >>> print("Generated caption:", processor.batch_decode(generated_ids, skip_special_tokens=True)) Generated caption: ['a woman is sitting at a table and she is talking about the food she is holding.']
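Beyond the generation examples above, GitForCausalLM also returns a language modeling loss when labels are provided, which is what the fine-tuning notebooks linked in the Resources section build on. The sketch below follows that pattern under the assumption that the tokenized caption can simply be reused as labels; the caption text itself is illustrative only:

>>> import requests
>>> from PIL import Image
>>> from transformers import AutoProcessor, GitForCausalLM

>>> processor = AutoProcessor.from_pretrained("microsoft/git-base")
>>> model = GitForCausalLM.from_pretrained("microsoft/git-base")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> # GitProcessor tokenizes the caption and preprocesses the image in a single call
>>> inputs = processor(text="two cats sleeping on a couch", images=image, return_tensors="pt")

>>> # reusing the caption ids as labels yields the next-token prediction loss over the caption
>>> outputs = model(**inputs, labels=inputs["input_ids"])
>>> loss = outputs.loss

From here, a standard PyTorch training loop (backward pass plus optimizer step) can be used to fine-tune the model on custom image-caption pairs.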
https://huggingface.co/docs/transformers/model_doc/gpt_neox
GPT-NeoX Overview We introduce GPT-NeoX-20B, a 20 billion parameter autoregressive language model trained on the Pile, whose weights will be made freely and openly available to the public through a permissive license. It is, to the best of our knowledge, the largest dense autoregressive model that has publicly available weights at the time of submission. In this work, we describe GPT-NeoX-20B’s architecture and training and evaluate its performance on a range of language-understanding, mathematics, and knowledge-based tasks. We find that GPT-NeoX-20B is a particularly powerful few-shot reasoner and gains far more in performance when evaluated five-shot than similarly sized GPT-3 and FairSeq models. We open-source the training and evaluation code, as well as the model weights, at https://github.com/EleutherAI/gpt-neox. Development of the model was led by Sid Black, Stella Biderman and Eric Hallahan, and the model was trained with the generous support of CoreWeave. GPT-NeoX-20B was trained with fp16, thus it is recommended to initialize the model as follows: model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b").half().cuda() GPT-NeoX-20B also has a different tokenizer from the one used in GPT-J-6B and GPT-Neo. The new tokenizer allocates additional tokens to whitespace characters, making the model more suitable for certain tasks like code generation. Generation The generate() method can be used to generate text using the GPT-NeoX model. >>> from transformers import GPTNeoXForCausalLM, GPTNeoXTokenizerFast >>> model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b") >>> tokenizer = GPTNeoXTokenizerFast.from_pretrained("EleutherAI/gpt-neox-20b") >>> prompt = "GPTNeoX20B is a 20B-parameter autoregressive Transformer model developed by EleutherAI." >>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids >>> gen_tokens = model.generate( ... input_ids, ... do_sample=True, ... temperature=0.9, ... max_length=100, ... ) >>> gen_text = tokenizer.batch_decode(gen_tokens)[0] Documentation resources Causal language modeling task guide GPTNeoXConfig class transformers.GPTNeoXConfig < source > ( vocab_size = 50432 hidden_size = 6144 num_hidden_layers = 44 num_attention_heads = 64 intermediate_size = 24576 hidden_act = 'gelu' rotary_pct = 0.25 rotary_emb_base = 10000 attention_dropout = 0.0 hidden_dropout = 0.0 classifier_dropout = 0.1 max_position_embeddings = 2048 initializer_range = 0.02 layer_norm_eps = 1e-05 use_cache = True bos_token_id = 0 eos_token_id = 2 tie_word_embeddings = False use_parallel_residual = True rope_scaling = None **kwargs ) Parameters vocab_size (int, optional, defaults to 50432) — Vocabulary size of the GPTNeoX model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling GPTNeoXModel. hidden_size (int, optional, defaults to 6144) — Dimension of the encoder layers and the pooler layer. num_hidden_layers (int, optional, defaults to 44) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 64) — Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (int, optional, defaults to 24576) — Dimension of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder. hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported. 
rotary_pct (float, optional, defaults to 0.25) — Percentage of hidden dimensions to allocate to rotary embeddings. rotary_emb_base (int, optional, defaults to 10000) — Base for computing the rotary embedding frequency. attention_dropout (float, optional, defaults to 0.0) — The dropout probability of the attention score. hidden_dropout (float, optional, defaults to 0.0) — The dropout ratio of (1) the word embeddings, (2) the post-attention hidden states, and (3) the post-mlp hidden states. classifier_dropout (float, optional, defaults to 0.1) — Argument used when doing token classification, used in the model GPTNeoXForTokenClassification. The dropout ratio for the hidden layer. max_position_embeddings (int, optional, defaults to 2048) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (float, optional, defaults to 1e-5) — The epsilon used by the layer normalization layers. use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if config.is_decoder=True. use_parallel_residual (bool, optional, defaults to True) — Whether to use a “parallel” formulation in each Transformer layer, which can provide a slight training speedup at large scales (e.g. 20B). rope_scaling (Dict, optional) — Dictionary containing the scaling configuration for the RoPE embeddings. Currently supports two scaling strategies: linear and dynamic. Their scaling factor must be a float greater than 1. The expected format is {"type": strategy name, "factor": scaling factor}. When using this flag, don’t update max_position_embeddings to the expected new maximum. See the following thread for more information on how these scaling strategies behave: https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/. This is an experimental feature, subject to breaking API changes in future versions. This is the configuration class to store the configuration of a GPTNeoXModel. It is used to instantiate a GPTNeoX model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the GPTNeoX EleutherAI/gpt-neox-20b architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import GPTNeoXConfig, GPTNeoXModel >>> # Initializing a GPTNeoX gpt-neox-20b style configuration >>> configuration = GPTNeoXConfig() >>> # Initializing a model (with random weights) from the gpt-neox-20b style configuration >>> model = GPTNeoXModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config GPTNeoXTokenizerFast class transformers.GPTNeoXTokenizerFast < source > ( vocab_file = None merges_file = None tokenizer_file = None unk_token = '<|endoftext|>' bos_token = '<|endoftext|>' eos_token = '<|endoftext|>' add_prefix_space = False **kwargs ) Parameters vocab_file (str) — Path to the vocabulary file. merges_file (str) — Path to the merges file. errors (str, optional, defaults to "replace") — Paradigm to follow when decoding bytes to UTF-8. See bytes.decode for more information. unk_token (str, optional, defaults to <|endoftext|>) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. 
bos_token (str, optional, defaults to <|endoftext|>) — The beginning of sequence token. eos_token (str, optional, defaults to <|endoftext|>) — The end of sequence token. add_prefix_space (bool, optional, defaults to False) — Whether or not to add an initial space to the input. This allows to treat the leading word just as any other word. (GPTNeoX tokenizer detect beginning of words by the preceding space). trim_offsets (bool, optional, defaults to True) — Whether or not the post-processing step should trim offsets to avoid including whitespaces. Construct a “fast” GPT-NeoX-20B tokenizer (backed by HuggingFace’s tokenizers library). Based on byte-level Byte-Pair-Encoding. This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will be encoded differently whether it is at the beginning of the sentence (without space) or not: >>> from transformers import GPTNeoXTokenizerFast >>> tokenizer = GPTNeoXTokenizerFast.from_pretrained("gpt2") >>> tokenizer("Hello world")["input_ids"] [15496, 995] >>> tokenizer(" Hello world")["input_ids"] [18435, 995] You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer, but since the model was not pretrained this way, it might yield a decrease in performance. When used with is_split_into_words=True, this tokenizer needs to be instantiated with add_prefix_space=True. This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. GPTNeoXModel class transformers.GPTNeoXModel < source > ( config ) Parameters config (~GPTNeoXConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare GPTNeoX Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? 
position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). A transformers.modeling_outputs.BaseModelOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPTNeoXConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). 
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The GPTNeoXModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. This example uses a random model as the real ones are all very big. To get proper results, you should use EleutherAI/gpt-neox-20b instead of trl-internal-testing/tiny-random-GPTNeoXForCausalLM. If you get out-of-memory when loading that checkpoint, you can try adding device_map="auto" in the from_pretrained call. Example: >>> from transformers import AutoTokenizer, GPTNeoXModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("trl-internal-testing/tiny-random-GPTNeoXForCausalLM") >>> model = GPTNeoXModel.from_pretrained("trl-internal-testing/tiny-random-GPTNeoXForCausalLM") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state GPTNeoXForCausalLM class transformers.GPTNeoXForCausalLM < source > ( config ) Parameters config (~GPTNeoXConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. GPTNeoX Model with a language modeling head on top for CLM fine-tuning. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None position_ids: typing.Optional[torch.LongTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. 
Selected in the range [0, config.n_positions - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). The two additional tensors are only required when the model is used as a decoder in a Sequence to Sequence model. Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]. use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). A transformers.modeling_outputs.CausalLMOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPTNeoXConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction). logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The GPTNeoXForCausalLM forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, GPTNeoXForCausalLM, GPTNeoXConfig >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b") >>> config = GPTNeoXConfig.from_pretrained("EleutherAI/gpt-neox-20b") >>> config.is_decoder = True >>> model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b", config=config) >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> prediction_logits = outputs.logits GPTNeoXForQuestionAnswering class transformers.GPTNeoXForQuestionAnswering < source > ( config ) Parameters config (~GPTNeoXConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The GPT-NeoX Model transformer with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits). This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None start_positions: typing.Optional[torch.LongTensor] = None end_positions: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. start_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. end_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPTNeoXConfig) and inputs. 
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The GPTNeoXForQuestionAnswering forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. This example uses a random model as the real ones are all very big. To get proper results, you should use EleutherAI/gpt-neox-20b instead of trl-internal-testing/tiny-random-GPTNeoXForCausalLM. If you get out-of-memory when loading that checkpoint, you can try adding device_map="auto" in the from_pretrained call. Example: >>> from transformers import AutoTokenizer, GPTNeoXForQuestionAnswering >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("trl-internal-testing/tiny-random-GPTNeoXForCausalLM") >>> model = GPTNeoXForQuestionAnswering.from_pretrained("trl-internal-testing/tiny-random-GPTNeoXForCausalLM") >>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" >>> inputs = tokenizer(question, text, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> answer_start_index = outputs.start_logits.argmax() >>> answer_end_index = outputs.end_logits.argmax() >>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] >>> >>> target_start_index = torch.tensor([14]) >>> target_end_index = torch.tensor([15]) >>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index) >>> loss = outputs.loss GPTNeoXForSequenceClassification class transformers.GPTNeoXForSequenceClassification < source > ( config ) Parameters config (~GPTNeoXConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The GPTNeoX Model transformer with a sequence classification head on top (linear layer). GPTNeoXForSequenceClassification uses the last token in order to do the classification, as other causal models (e.g. GPT-1) do. Since it does classification on the last token, it requires to know the position of the last token. 
If a pad_token_id is defined in the configuration, it finds the last token that is not a padding token in each row. If no pad_token_id is defined, it simply takes the last value in each row of the batch. Since it cannot guess the padding tokens when inputs_embeds are passed instead of input_ids, it does the same (take the last value in each row of the batch). This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None position_ids: typing.Optional[torch.LongTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.SequenceClassifierOutputWithPast or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if config.num_labels > 1 a classification loss is computed (Cross-Entropy).
Returns transformers.modeling_outputs.SequenceClassifierOutputWithPast or tuple(torch.FloatTensor) A transformers.modeling_outputs.SequenceClassifierOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPTNeoXConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The GPTNeoXForSequenceClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example of single-label classification: >>> import torch >>> from transformers import AutoTokenizer, GPTNeoXForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("trl-internal-testing/tiny-random-GPTNeoXForCausalLM") >>> model = GPTNeoXForSequenceClassification.from_pretrained("trl-internal-testing/tiny-random-GPTNeoXForCausalLM") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_id = logits.argmax().item() >>> >>> num_labels = len(model.config.id2label) >>> model = GPTNeoXForSequenceClassification.from_pretrained("trl-internal-testing/tiny-random-GPTNeoXForCausalLM", num_labels=num_labels) >>> labels = torch.tensor([1]) >>> loss = model(**inputs, labels=labels).loss Example of multi-label classification: >>> import torch >>> from transformers import AutoTokenizer, GPTNeoXForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("trl-internal-testing/tiny-random-GPTNeoXForCausalLM") >>> model = GPTNeoXForSequenceClassification.from_pretrained("trl-internal-testing/tiny-random-GPTNeoXForCausalLM", problem_type="multi_label_classification") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... 
logits = model(**inputs).logits >>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5] >>> >>> num_labels = len(model.config.id2label) >>> model = GPTNeoXForSequenceClassification.from_pretrained( ... "trl-internal-testing/tiny-random-GPTNeoXForCausalLM", num_labels=num_labels, problem_type="multi_label_classification" ... ) >>> labels = torch.sum( ... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1 ... ).to(torch.float) >>> loss = model(**inputs, labels=labels).loss GPTNeoXForTokenClassification class transformers.GPTNeoXForTokenClassification < source > ( config ) forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss); if config.num_labels > 1 a classification loss is computed (Cross-Entropy).
A transformers.modeling_outputs.TokenClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPTNeoXConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The GPTNeoXForTokenClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, GPTNeoXForTokenClassification >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("LarsJonasson/pythia-410m-deduped-sft-swedish") >>> model = GPTNeoXForTokenClassification.from_pretrained("LarsJonasson/pythia-410m-deduped-sft-swedish") >>> inputs = tokenizer( ... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt" ... ) >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_token_class_ids = logits.argmax(-1) >>> >>> >>> >>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]] >>> labels = predicted_token_class_ids >>> loss = model(**inputs, labels=labels).loss >>> round(loss.item(), 2) 0.25
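The GPTNeoXForCausalLM section above only demonstrates a forward pass that returns logits. As a minimal, hedged sketch (not taken from the official documentation), the same generate() API used elsewhere in this document also works here; the prompt and sampling settings below are purely illustrative, and, as noted above, loading the 20B checkpoint may require adding device_map="auto" (or switching to the tiny random test checkpoint) if memory is limited.
>>> from transformers import AutoTokenizer, GPTNeoXForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
>>> model = GPTNeoXForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b")

>>> # Illustrative prompt and sampling settings
>>> input_ids = tokenizer("GPT-NeoX-20B is an open-source language model that", return_tensors="pt").input_ids
>>> gen_tokens = model.generate(
...     input_ids,
...     do_sample=True,
...     temperature=0.9,
...     max_length=100,
... )
>>> gen_text = tokenizer.batch_decode(gen_tokens, skip_special_tokens=True)[0]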
https://huggingface.co/docs/transformers/model_doc/gpt_neox_japanese
GPT-NeoX-Japanese Overview We introduce GPT-NeoX-Japanese, which is an autoregressive language model for Japanese, trained on top of https://github.com/EleutherAI/gpt-neox. Japanese is a unique language, with a large vocabulary and a combination of hiragana, katakana, and kanji writing scripts. To address this distinct structure of the Japanese language, we use a special sub-word tokenizer. We are very grateful to tanreinama for open-sourcing this incredibly helpful tokenizer. Following the recommendations from Google’s research on PaLM, we have removed bias parameters from transformer blocks, achieving better model performance. Please refer to this article for details. Development of the model was led by Shinya Otani, Takayoshi Makabe, Anuj Arora, and Kyo Hattori from ABEJA, Inc. For more information on this model-building activity, please see here (ja). Generation The generate() method can be used to generate text using the GPT NeoX Japanese model. >>> from transformers import GPTNeoXJapaneseForCausalLM, GPTNeoXJapaneseTokenizer >>> model = GPTNeoXJapaneseForCausalLM.from_pretrained("abeja/gpt-neox-japanese-2.7b") >>> tokenizer = GPTNeoXJapaneseTokenizer.from_pretrained("abeja/gpt-neox-japanese-2.7b") >>> prompt = "人とAIが協調するためには、" >>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids >>> gen_tokens = model.generate( ... input_ids, ... do_sample=True, ... temperature=0.9, ... max_length=100, ... ) >>> gen_text = tokenizer.batch_decode(gen_tokens, skip_special_tokens=True)[0] >>> print(gen_text) 人とAIが協調するためには、AIと人が共存し、AIを正しく理解する必要があります。 Documentation resources Causal language modeling task guide GPTNeoXJapaneseConfig class transformers.GPTNeoXJapaneseConfig < source > ( vocab_size = 32000 hidden_size = 2560 num_hidden_layers = 32 num_attention_heads = 32 intermediate_multiple_size = 4 hidden_act = 'gelu' rotary_pct = 1.0 rotary_emb_base = 10000 max_position_embeddings = 2048 initializer_range = 0.02 layer_norm_eps = 1e-05 use_cache = True bos_token_id = 31996 eos_token_id = 31999 attention_dropout = 0.1 hidden_dropout = 0.0 **kwargs ) Parameters vocab_size (int, optional, defaults to 32000) — Vocabulary size of the GPTNeoXJapanese model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling GPTNeoXJapanese. hidden_size (int, optional, defaults to 2560) — Dimension of the encoder layers and the pooler layer. num_hidden_layers (int, optional, defaults to 32) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 32) — Number of attention heads for each attention layer in the Transformer encoder. intermediate_multiple_size (int, optional, defaults to 4) — The dimension of the “intermediate” layer in the Transformer encoder is calculated as hidden_size * intermediate_multiple_size. hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. rotary_pct (float, optional, defaults to 1.00) — Percentage of hidden dimensions to allocate to rotary embeddings. rotary_emb_base (int, optional, defaults to 10000) — Base for computing the rotary embeddings frequency. max_position_embeddings (int, optional, defaults to 2048) — The maximum sequence length that this model might ever be used with. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-5) — The epsilon used by the layer normalization layers. use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if config.is_decoder=True. attention_dropout (float, optional, defaults to 0.1) — The dropout ratio for the attention. hidden_dropout (float, optional, defaults to 0.0) — The dropout ratio for the hidden layer. This is the configuration class to store the configuration of a GPTNeoXJapaneseModel. It is used to instantiate a GPTNeoX model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the GPTNeoXJapanese abeja/gpt-neox-japanese-2.7b architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. The default configuration corresponds to the 2.7B model. Example: >>> from transformers import GPTNeoXJapaneseConfig, GPTNeoXJapaneseModel >>> >>> configuration = GPTNeoXJapaneseConfig() >>> >>> model = GPTNeoXJapaneseModel(configuration) >>> >>> configuration = model.config GPTNeoXJapaneseTokenizer class transformers.GPTNeoXJapaneseTokenizer < source > ( vocab_file emoji_file unk_token = '<|endoftext|>' pad_token = '<|endoftext|>' bos_token = '<|startoftext|>' eos_token = '<|endoftext|>' do_clean_text = False **kwargs ) Parameters vocab_file (str) — File containing the vocabulary. emoji_file (str) — File containing the emoji. unk_token (str, optional, defaults to "<|endoftext|>") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. pad_token (str, optional, defaults to "<|endoftext|>") — The token used for padding. bos_token (str, optional, defaults to "<|startoftext|>") — The beginning of sequence token. eos_token (str, optional, defaults to "<|endoftext|>") — The end of sequence token. do_clean_text (bool, optional, defaults to False) — Whether or not to clean text for URL, EMAIL, TEL, Japanese DATE and Japanese PRICE. This tokenizer inherits from PreTrainedTokenizer and is based on the special Japanese Sub-Word-Encoding that is used in this repository (https://github.com/tanreinama/Japanese-BPEEncoder_V2). Check the repository for details. Japanese has a relatively large vocabulary and there is no separation between words. Furthermore, the language is a combination of hiragana, katakana, and kanji, and variants such as “1” and “①” are often used. In order to cope with these, this tokenizer has the following features: Subword-by-subword segmentation, which is intermediate between byte strings and morphological analysis. BPEs are created for each Kanji, Hiragana, and Katakana character, and there are no BPEs that cross character types, such as Kanji + Hiragana or Hiragana + Katakana. All-byte encoding that does not require <unk>. Independent of UTF codes such as 2-byte and 3-byte characters. Conversion of heterographs to the same token_id. Emoji and emoticons are grouped into 12 types as special tags.
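Before the standard example below, here is a small, hedged illustration of the do_clean_text option described above (assumed usage, not taken from the official documentation); the resulting token IDs are not shown because they depend on the vocabulary.
>>> from transformers import GPTNeoXJapaneseTokenizer

>>> # Assumed usage: enable cleaning so URLs, emails, phone numbers, Japanese dates and prices are normalized before tokenization
>>> cleaning_tokenizer = GPTNeoXJapaneseTokenizer.from_pretrained("abeja/gpt-neox-japanese-2.7b", do_clean_text=True)
>>> ids = cleaning_tokenizer("詳細は https://github.com/tanreinama/Japanese-BPEEncoder_V2 を参照してください。")["input_ids"]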
Example: >>> from transformers import GPTNeoXJapaneseTokenizer >>> tokenizer = GPTNeoXJapaneseTokenizer.from_pretrained("abeja/gpt-neox-japanese-2.7b") >>> >>> tokenizer("吾輩は猫である🐯。実は慶応(慶應)大学出身")["input_ids"] [30014, 26883, 26638, 27228, 25, 26650, 31732, 31679, 27809, 26638, 17749, 31592, 17749, 31593, 321, 1281] >>> >>> tokenizer.decode(tokenizer("吾輩は猫である🐯。実は慶応(慶應)大学出身")["input_ids"]) '吾輩は猫である🐯。実は慶応(慶応)大学出身' Converts a sequence of tokens (string) in a single string. GPTNeoXJapaneseModel class transformers.GPTNeoXJapaneseModel < source > ( config ) Parameters config (~GPTNeoXJapaneseConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare GPTNeoXJapanese Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. 
See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). A transformers.modeling_outputs.BaseModelOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPTNeoXJapaneseConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The GPTNeoXJapaneseModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: >>> from transformers import AutoTokenizer, GPTNeoXJapaneseModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("abeja/gpt-neox-japanese-2.7b") >>> model = GPTNeoXJapaneseModel.from_pretrained("abeja/gpt-neox-japanese-2.7b") >>> inputs = tokenizer("日本語のGPT-neoxがHugging Faceで使えます😀", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state GPTNeoXJapaneseForCausalLM class transformers.GPTNeoXJapaneseForCausalLM < source > ( config ) Parameters config (~GPTNeoXJapaneseConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. GPTNeoXJapanese Model with a language modeling head on top for Classifier Model fine-tuning. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. 
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head) and 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). The two additional tensors are only required when the model is used as a decoder in a Sequence to Sequence model. Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring). Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]. use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). A transformers.modeling_outputs.CausalLMOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPTNeoXJapaneseConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction). logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The GPTNeoXJapaneseForCausalLM forward method, overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, GPTNeoXJapaneseForCausalLM, GPTNeoXJapaneseConfig >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("abeja/gpt-neox-japanese-2.7b") >>> config = GPTNeoXJapaneseConfig.from_pretrained("abeja/gpt-neox-japanese-2.7b") >>> config.is_decoder = True >>> model = GPTNeoXJapaneseForCausalLM.from_pretrained("abeja/gpt-neox-japanese-2.7b", config=config) >>> inputs = tokenizer("日本語のGPT-neoxがHugging Faceで使えます😀", return_tensors="pt") >>> outputs = model(**inputs) >>> prediction_logits = outputs.logits
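The example above only inspects the logits. As a minimal, hedged sketch following the general causal-language-modeling convention described for the labels argument (reusing the input IDs as labels, with the shift assumed to happen inside the model), a training-style loss can be obtained as follows; the prompt is illustrative.
>>> from transformers import AutoTokenizer, GPTNeoXJapaneseForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("abeja/gpt-neox-japanese-2.7b")
>>> model = GPTNeoXJapaneseForCausalLM.from_pretrained("abeja/gpt-neox-japanese-2.7b")

>>> inputs = tokenizer("人とAIが協調するためには、", return_tensors="pt")
>>> # Reuse the input IDs as labels; positions set to -100 would be ignored by the loss
>>> outputs = model(**inputs, labels=inputs["input_ids"])
>>> loss = outputs.loss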
https://huggingface.co/docs/transformers/model_doc/gpt_neo
GPT Neo Overview The GPTNeo model was released in the EleutherAI/gpt-neo repository by Sid Black, Stella Biderman, Leo Gao, Phil Wang and Connor Leahy. It is a GPT2-like causal language model trained on the Pile dataset. The architecture is similar to GPT2 except that GPT Neo uses local attention in every other layer with a window size of 256 tokens. This model was contributed by valhalla. Generation The generate() method can be used to generate text using the GPT Neo model. >>> from transformers import GPTNeoForCausalLM, GPT2Tokenizer >>> model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B") >>> tokenizer = GPT2Tokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B") >>> prompt = ( ... "In a shocking finding, scientists discovered a herd of unicorns living in a remote, " ... "previously unexplored valley, in the Andes Mountains. Even more surprising to the " ... "researchers was the fact that the unicorns spoke perfect English." ... ) >>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids >>> gen_tokens = model.generate( ... input_ids, ... do_sample=True, ... temperature=0.9, ... max_length=100, ... ) >>> gen_text = tokenizer.batch_decode(gen_tokens)[0] Documentation resources Text classification task guide Causal language modeling task guide GPTNeoConfig class transformers.GPTNeoConfig < source > ( vocab_size = 50257 max_position_embeddings = 2048 hidden_size = 2048 num_layers = 24 attention_types = [[['global', 'local'], 12]] num_heads = 16 intermediate_size = None window_size = 256 activation_function = 'gelu_new' resid_dropout = 0.0 embed_dropout = 0.0 attention_dropout = 0.0 classifier_dropout = 0.1 layer_norm_epsilon = 1e-05 initializer_range = 0.02 use_cache = True bos_token_id = 50256 eos_token_id = 50256 **kwargs ) Parameters vocab_size (int, optional, defaults to 50257) — Vocabulary size of the GPT Neo model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling GPTNeoModel. max_position_embeddings (int, optional, defaults to 2048) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). hidden_size (int, optional, defaults to 2048) — Dimensionality of the encoder layers and the pooler layer. num_layers (int, optional, defaults to 24) — Number of hidden layers in the Transformer encoder. attention_types (List, optional, defaults to [[["global", "local"], 12]]) — The type of attention for each layer, in a list of the following format [[["attention_type"], num_layers]], e.g. for a 24-layer model [[["global"], 24]] or [[["global", "local"], 12]]. Choose the value of attention_type from ["global", "local"]. num_heads (int, optional, defaults to 16) — Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (int, optional, defaults to 8192) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder. window_size (int, optional, defaults to 256) — The size of the sliding window for local attention. activation_function (str or function, optional, defaults to "gelu_new") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported. resid_dropout (float, optional, defaults to 0.0) — Residual dropout used in the attention pattern.
embed_dropout (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities. classifier_dropout (float, optional, defaults to 0.1) — Argument used when doing token classification, used in the model GPTNeoForTokenClassification. The dropout ratio for the hidden layer. layer_norm_epsilon (float, optional, defaults to 1e-5) — The epsilon used by the layer normalization layers. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if config.is_decoder=True. bos_token_id (int, optional, defaults to 50256) — The id of the beginning of sentence token in the vocabulary. eos_token_id (int, optional, defaults to 50256) — The id of the end of sentence token in the vocabulary. This is the configuration class to store the configuration of a GPTNeoModel. It is used to instantiate a GPT Neo model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the GPTNeo EleutherAI/gpt-neo-1.3B architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import GPTNeoConfig, GPTNeoModel >>> >>> configuration = GPTNeoConfig() >>> >>> model = GPTNeoModel(configuration) >>> >>> configuration = model.config GPTNeoModel class transformers.GPTNeoModel < source > ( config ) Parameters config (GPTNeoConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare GPT Neo Model transformer outputting raw hidden-states without any specific head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past_key_values is None else past_key_values[0][0].shape[-2] (sequence_length of input past key value states).
Indices of input sequence tokens in the vocabulary. If past_key_values is used, only input_ids that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? past_key_values (Tuple[Tuple[torch.Tensor]] of length config.num_layers) — Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have their past given to this model should not be passed as input_ids as they have already been computed. attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, input_ids_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If past_key_values is used, optionally only the last inputs_embeds have to be input (see past_key_values). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPTNeoConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output. 
past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. The GPTNeoModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, GPTNeoModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B") >>> model = GPTNeoModel.from_pretrained("EleutherAI/gpt-neo-1.3B") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state GPTNeoForCausalLM class transformers.GPTNeoForCausalLM < source > ( config ) Parameters config (GPTNeoConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The GPT Neo Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
forward < source > ( input_ids: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past_key_values is None else past_key_values[0][0].shape[-2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. If past_key_values is used, only input_ids that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? past_key_values (Tuple[Tuple[torch.Tensor]] of length config.num_layers) — Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have their past given to this model should not be passed as input_ids as they have already been computed. attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, input_ids_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If past_key_values is used, optionally only the last inputs_embeds have to be input (see past_key_values). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. 
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set labels = input_ids Indices are selected in [-100, 0, ..., config.vocab_size] All labels set to -100 are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size] A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPTNeoConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction). logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Cross attentions weights after the attention softmax, used to compute the weighted average in the cross-attention heads. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key, value states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting. Only relevant if config.is_decoder = True. Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. The GPTNeoForCausalLM forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: >>> import torch >>> from transformers import AutoTokenizer, GPTNeoForCausalLM >>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B") >>> model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs, labels=inputs["input_ids"]) >>> loss = outputs.loss >>> logits = outputs.logits GPTNeoForQuestionAnswering class transformers.GPTNeoForQuestionAnswering < source > ( config ) Parameters config (GPTNeoConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The GPT-Neo Model transformer with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits). This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None start_positions: typing.Optional[torch.LongTensor] = None end_positions: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past_key_values is None else past_key_values[0][0].shape[-2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. If past_key_values is used, only input_ids that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? past_key_values (Tuple[Tuple[torch.Tensor]] of length config.num_layers) — Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have their past given to this model should not be passed as input_ids as they have already been computed. attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, input_ids_length), optional) — Segment token indices to indicate first and second portions of the inputs. 
Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If past_key_values is used, optionally only the last inputs_embeds have to be input (see past_key_values). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. start_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. end_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPTNeoConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. 
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The GPTNeoForQuestionAnswering forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. The example below loads the full EleutherAI/gpt-neo-1.3B checkpoint. If you get out-of-memory when loading it, you can try adding device_map="auto" in the from_pretrained call. Example: >>> from transformers import AutoTokenizer, GPTNeoForQuestionAnswering >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B") >>> model = GPTNeoForQuestionAnswering.from_pretrained("EleutherAI/gpt-neo-1.3B") >>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" >>> inputs = tokenizer(question, text, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> answer_start_index = outputs.start_logits.argmax() >>> answer_end_index = outputs.end_logits.argmax() >>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] >>> # computing the loss with a labeled answer span >>> target_start_index = torch.tensor([14]) >>> target_end_index = torch.tensor([15]) >>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index) >>> loss = outputs.loss GPTNeoForSequenceClassification class transformers.GPTNeoForSequenceClassification < source > ( config ) Parameters config (GPTNeoConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The GPTNeo Model transformer with a sequence classification head on top (linear layer). GPTNeoForSequenceClassification uses the last token in order to do the classification, as other causal models (e.g. GPT-1) do. Since it does classification on the last token, it needs to know the position of the last token. If a pad_token_id is defined in the configuration, it finds the last token that is not a padding token in each row. If no pad_token_id is defined, it simply takes the last value in each row of the batch. Since it cannot guess the padding tokens when inputs_embeds are passed instead of input_ids, it does the same (take the last value in each row of the batch). A short sketch of this last-token lookup is shown below. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
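As a rough sketch of that last-token lookup (the token ids, the padding id and the indexing below are made-up illustrations, not the exact library code):

>>> import torch

>>> pad_token_id = 0  # hypothetical padding id for this sketch
>>> input_ids = torch.tensor([[11, 22, 33, 0, 0], [44, 55, 66, 77, 88]])  # two rows, the first one padded
>>> # index of the last non-padding token in each row
>>> sequence_lengths = torch.ne(input_ids, pad_token_id).sum(dim=-1) - 1
>>> sequence_lengths
tensor([2, 4])

The hidden state at each of these positions is what the classification head scores.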
forward < source > ( input_ids: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.SequenceClassifierOutputWithPast or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past_key_values is None else past_key_values[0][0].shape[-2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. If past_key_values is used, only input_ids that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? past_key_values (Tuple[Tuple[torch.Tensor]] of length config.num_layers) — Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have their past given to this model should not be passed as input_ids as they have already been computed. attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, input_ids_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If past_key_values is used, optionally only the last inputs_embeds have to be input (see past_key_values). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. 
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). Returns transformers.modeling_outputs.SequenceClassifierOutputWithPast or tuple(torch.FloatTensor) A transformers.modeling_outputs.SequenceClassifierOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPTNeoConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The GPTNeoForSequenceClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example of single-label classification: >>> import torch >>> from transformers import AutoTokenizer, GPTNeoForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B") >>> model = GPTNeoForSequenceClassification.from_pretrained("EleutherAI/gpt-neo-1.3B") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... 
logits = model(**inputs).logits >>> predicted_class_id = logits.argmax().item() >>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)` >>> num_labels = len(model.config.id2label) >>> model = GPTNeoForSequenceClassification.from_pretrained("EleutherAI/gpt-neo-1.3B", num_labels=num_labels) >>> labels = torch.tensor([1]) >>> loss = model(**inputs, labels=labels).loss Example of multi-label classification: >>> import torch >>> from transformers import AutoTokenizer, GPTNeoForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B") >>> model = GPTNeoForSequenceClassification.from_pretrained("EleutherAI/gpt-neo-1.3B", problem_type="multi_label_classification") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5] >>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)` >>> num_labels = len(model.config.id2label) >>> model = GPTNeoForSequenceClassification.from_pretrained( ... "EleutherAI/gpt-neo-1.3B", num_labels=num_labels, problem_type="multi_label_classification" ... ) >>> labels = torch.sum( ... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1 ... ).to(torch.float) >>> loss = model(**inputs, labels=labels).loss GPTNeoForTokenClassification class transformers.GPTNeoForTokenClassification < source > ( config ) Parameters config (GPTNeoConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. GPT Neo model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past_key_values is None else past_key_values[0][0].shape[-2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. If past_key_values is used, only input_ids that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs?
past_key_values (Tuple[Tuple[torch.Tensor]] of length config.num_layers) — Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have their past given to this model should not be passed as input_ids as they have already been computed. attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, input_ids_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If past_key_values is used, optionally only the last inputs_embeds have to be input (see past_key_values). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_outputs.TokenClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPTNeoConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). 
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The GPTNeoForTokenClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, GPTNeoForTokenClassification >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125m") >>> model = GPTNeoForTokenClassification.from_pretrained("EleutherAI/gpt-neo-125m") >>> inputs = tokenizer( ... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt" ... ) >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_token_class_ids = logits.argmax(-1) >>> # Note that tokens are classified rather than input words, which means that >>> # there might be more predicted token classes than words. >>> # Multiple token classes might account for the same word >>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]] >>> labels = predicted_token_class_ids >>> loss = model(**inputs, labels=labels).loss >>> round(loss.item(), 2) 0.25 FlaxGPTNeoModel class transformers.FlaxGPTNeoModel < source > ( config: GPTNeoConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs ) Parameters config (GPTNeoConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype. Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters. If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16(). The bare GPTNeo Model transformer outputting raw hidden-states without any specific head on top. This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a Flax Linen flax.nn.Module subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
Finally, this model supports inherent JAX features such as: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization __call__ < source > ( input_ids attention_mask = None position_ids = None params: dict = None past_key_values: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_flax_outputs.FlaxBaseModelOutput or tuple(torch.FloatTensor) Parameters input_ids (numpy.ndarray of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length. Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) — Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length]. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_flax_outputs.FlaxBaseModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPTNeoConfig) and inputs. last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaxGPTNeoPreTrainedModel forward method, overrides the __call__ special method. 
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FlaxGPTNeoModel >>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B") >>> model = FlaxGPTNeoModel.from_pretrained("EleutherAI/gpt-neo-1.3B") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="jax") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state FlaxGPTNeoForCausalLM class transformers.FlaxGPTNeoForCausalLM < source > ( config: GPTNeoConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs ) Parameters config (GPTNeoConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype. Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters. If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16(). The GPTNeo Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a Flax Linen flax.nn.Module subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matter related to general usage and behavior. Finally, this model supports inherent JAX features such as: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization __call__ < source > ( input_ids attention_mask = None position_ids = None params: dict = None past_key_values: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_flax_outputs.FlaxMaskedLMOutput or tuple(torch.FloatTensor) Parameters input_ids (numpy.ndarray of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length. Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. 
past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) — Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length]. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_flax_outputs.FlaxMaskedLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPTNeoConfig) and inputs. logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaxGPTNeoPreTrainedModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FlaxGPTNeoForCausalLM >>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B") >>> model = FlaxGPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="np") >>> outputs = model(**inputs) >>> >>> next_token_logits = outputs.logits[:, -1]
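Continuing the example above, one possible way to use these logits — shown here only as an illustrative sketch — is to greedily pick the highest-scoring token and decode it; for full text generation you would normally rely on the generate() method instead:

>>> import jax.numpy as jnp

>>> # greedy choice for the next token of the first (and only) sequence in the batch
>>> next_token_id = int(jnp.argmax(next_token_logits, axis=-1)[0])
>>> next_token = tokenizer.decode([next_token_id])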
Funnel Transformer Overview The Funnel Transformer model was proposed in the paper Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing. It is a bidirectional transformer model, like BERT, but with a pooling operation after each block of layers, a bit like in traditional convolutional neural networks (CNN) in computer vision. The abstract from the paper is the following: With the success of language pretraining, it is highly desirable to develop more efficient architectures of good scalability that can exploit the abundant unlabeled data at a lower cost. To improve the efficiency, we examine the much-overlooked redundancy in maintaining a full-length token-level presentation, especially for tasks that only require a single-vector presentation of the sequence. With this intuition, we propose Funnel-Transformer which gradually compresses the sequence of hidden states to a shorter one and hence reduces the computation cost. More importantly, by re-investing the saved FLOPs from length reduction in constructing a deeper or wider model, we further improve the model capacity. In addition, to perform token-level predictions as required by common pretraining objectives, Funnel-Transformer is able to recover a deep representation for each token from the reduced hidden sequence via a decoder. Empirically, with comparable or fewer FLOPs, Funnel-Transformer outperforms the standard Transformer on a wide variety of sequence-level prediction tasks, including text classification, language understanding, and reading comprehension. Tips: Since Funnel Transformer uses pooling, the sequence length of the hidden states changes after each block of layers. This way, their length is divided by 2, which speeds up the computation of the next hidden states. The base model therefore has a final sequence length that is a quarter of the original one. This model can be used directly for tasks that just require a sentence summary (like sequence classification or multiple choice). For other tasks, the full model is used; this full model has a decoder that upsamples the final hidden states to the same sequence length as the input. For tasks such as classification, this is not a problem, but for tasks like masked language modeling or token classification, we need a hidden state with the same sequence length as the original input. In those cases, the final hidden states are upsampled to the input sequence length and go through two additional layers. That’s why there are two versions of each checkpoint. The version suffixed with “-base” contains only the three blocks, while the version without that suffix contains the three blocks and the upsampling head with its additional layers. The Funnel Transformer checkpoints are all available with a full version and a base version. The first ones should be used for FunnelModel, FunnelForPreTraining, FunnelForMaskedLM, FunnelForTokenClassification and FunnelForQuestionAnswering. The second ones should be used for FunnelBaseModel, FunnelForSequenceClassification and FunnelForMultipleChoice. This model was contributed by sgugger. The original code can be found here. 
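The length behaviour described in the tips above can be checked with a short sketch (illustrative only; it assumes the funnel-transformer/small and funnel-transformer/small-base checkpoints and simply compares the sequence lengths of the returned hidden states):

>>> import torch
>>> from transformers import AutoTokenizer, FunnelBaseModel, FunnelModel

>>> tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

>>> # "-base" checkpoint: only the three blocks, output stays at the pooled (shorter) length
>>> base_model = FunnelBaseModel.from_pretrained("funnel-transformer/small-base")
>>> # full checkpoint: three blocks plus the upsampling head, output is back at the input length
>>> full_model = FunnelModel.from_pretrained("funnel-transformer/small")

>>> with torch.no_grad():
...     pooled_states = base_model(**inputs).last_hidden_state
...     full_states = full_model(**inputs).last_hidden_state

>>> # pooling makes the base model's output shorter than the input ...
>>> pooled_states.shape[1] < inputs.input_ids.shape[1]
True
>>> # ... while the full model's decoder restores the input sequence length
>>> full_states.shape[1] == inputs.input_ids.shape[1]
True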
Documentation resources Text classification task guide Token classification task guide Question answering task guide Masked language modeling task guide Multiple choice task guide FunnelConfig class transformers.FunnelConfig < source > ( vocab_size = 30522 block_sizes = [4, 4, 4] block_repeats = None num_decoder_layers = 2 d_model = 768 n_head = 12 d_head = 64 d_inner = 3072 hidden_act = 'gelu_new' hidden_dropout = 0.1 attention_dropout = 0.1 activation_dropout = 0.0 initializer_range = 0.1 initializer_std = None layer_norm_eps = 1e-09 pooling_type = 'mean' attention_type = 'relative_shift' separate_cls = True truncate_seq = True pool_q_only = True **kwargs ) Parameters vocab_size (int, optional, defaults to 30522) — Vocabulary size of the Funnel transformer. Defines the number of different tokens that can be represented by the inputs_ids passed when calling FunnelModel or TFFunnelModel. block_sizes (List[int], optional, defaults to [4, 4, 4]) — The sizes of the blocks used in the model. block_repeats (List[int], optional) — If passed along, each layer of each block is repeated the number of times indicated. num_decoder_layers (int, optional, defaults to 2) — The number of layers in the decoder (when not using the base model). d_model (int, optional, defaults to 768) — Dimensionality of the model’s hidden states. n_head (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. d_head (int, optional, defaults to 64) — Dimensionality of the model’s heads. d_inner (int, optional, defaults to 3072) — Inner dimension in the feed-forward blocks. hidden_act (str or callable, optional, defaults to "gelu_new") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported. hidden_dropout (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_dropout (float, optional, defaults to 0.1) — The dropout probability for the attention probabilities. activation_dropout (float, optional, defaults to 0.0) — The dropout probability used between the two layers of the feed-forward blocks. initializer_range (float, optional, defaults to 0.1) — The upper bound of the uniform initializer for initializing all weight matrices in attention layers. initializer_std (float, optional) — The standard deviation of the normal initializer for initializing the embedding matrix and the weight of linear layers. Will default to 1 for the embedding matrix and the value given by Xavier initialization for linear layers. layer_norm_eps (float, optional, defaults to 1e-9) — The epsilon used by the layer normalization layers. pooling_type (str, optional, defaults to "mean") — Possible values are "mean" or "max". The way pooling is performed at the beginning of each block. attention_type (str, optional, defaults to "relative_shift") — Possible values are "relative_shift" or "factorized". The former is faster on CPU/GPU while the latter is faster on TPU. separate_cls (bool, optional, defaults to True) — Whether or not to separate the cls token when applying pooling. truncate_seq (bool, optional, defaults to True) — When using separate_cls, whether or not to truncate the last token when pooling, to avoid getting a sequence length that is not a multiple of 2. pool_q_only (bool, optional, defaults to True) — Whether or not to apply the pooling only to the query or to query, key and values for the attention layers.
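As a quick sketch of how these options are combined (the smaller block sizes and dimensions below are made up for illustration and do not correspond to a released checkpoint):

>>> from transformers import FunnelConfig, FunnelModel

>>> # a small, hypothetical configuration: three blocks of two layers each,
>>> # max-pooling at the start of each block instead of the default mean-pooling
>>> configuration = FunnelConfig(block_sizes=[2, 2, 2], d_model=512, n_head=8, d_head=64, pooling_type="max")

>>> # initializing a model (with random weights) from this configuration
>>> model = FunnelModel(configuration)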
FunnelTokenizer class transformers.FunnelTokenizer < source > ( vocab_file do_lower_case = True do_basic_tokenize = True never_split = None unk_token = '<unk>' sep_token = '<sep>' pad_token = '<pad>' cls_token = '<cls>' mask_token = '<mask>' bos_token = '<s>' eos_token = '</s>' tokenize_chinese_chars = True strip_accents = None **kwargs ) Parameters vocab_file (str) — File containing the vocabulary. do_lower_case (bool, optional, defaults to True) — Whether or not to lowercase the input when tokenizing. do_basic_tokenize (bool, optional, defaults to True) — Whether or not to do basic tokenization before WordPiece. never_split (Iterable, optional) — Collection of tokens which will never be split during tokenization. Only has an effect when do_basic_tokenize=True unk_token (str, optional, defaults to "<unk>") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. sep_token (str, optional, defaults to "<sep>") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. pad_token (str, optional, defaults to "<pad>") — The token used for padding, for example when batching sequences of different lengths. cls_token (str, optional, defaults to "<cls>") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. mask_token (str, optional, defaults to "<mask>") — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. bos_token (str, optional, defaults to "<s>") — The beginning of sentence token. eos_token (str, optional, defaults to "</s>") — The end of sentence token. tokenize_chinese_chars (bool, optional, defaults to True) — Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this issue). strip_accents (bool, optional) — Whether or not to strip all accents. If this option is not specified, then it will be determined by the value for lowercase (as in the original BERT). Construct a Funnel Transformer tokenizer. Based on WordPiece. This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. build_inputs_with_special_tokens < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs to which the special tokens will be added. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of input IDs with the appropriate special tokens.
Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A BERT sequence has the following format: single sequence: [CLS] X [SEP] pair of sequences: [CLS] A [SEP] B [SEP] get_special_tokens_mask < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None already_has_special_tokens: bool = False ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. already_has_special_tokens (bool, optional, defaults to False) — Whether or not the token list is already formatted with special tokens for the model. A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model method. create_token_type_ids_from_sequences < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of token type IDs according to the given sequence(s). Create a mask from the two sequences passed to be used in a sequence-pair classification task. A Funnel Transformer sequence pair mask has the following format: 2 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 | first sequence | second sequence | If token_ids_1 is None, this method only returns the first portion of the mask (0s). save_vocabulary < source > ( save_directory: str filename_prefix: typing.Optional[str] = None ) FunnelTokenizerFast class transformers.FunnelTokenizerFast < source > ( vocab_file = None tokenizer_file = None do_lower_case = True unk_token = '<unk>' sep_token = '<sep>' pad_token = '<pad>' cls_token = '<cls>' mask_token = '<mask>' bos_token = '<s>' eos_token = '</s>' clean_text = True tokenize_chinese_chars = True strip_accents = None wordpieces_prefix = '##' **kwargs ) Parameters vocab_file (str) — File containing the vocabulary. do_lower_case (bool, optional, defaults to True) — Whether or not to lowercase the input when tokenizing. unk_token (str, optional, defaults to "<unk>") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. sep_token (str, optional, defaults to "<sep>") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. pad_token (str, optional, defaults to "<pad>") — The token used for padding, for example when batching sequences of different lengths. cls_token (str, optional, defaults to "<cls>") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. mask_token (str, optional, defaults to "<mask>") — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. 
clean_text (bool, optional, defaults to True) — Whether or not to clean the text before tokenization by removing any control characters and replacing all whitespaces by the classic one. tokenize_chinese_chars (bool, optional, defaults to True) — Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this issue). bos_token (str, optional, defaults to "<s>") — The beginning of sentence token. eos_token (str, optional, defaults to "</s>") — The end of sentence token. strip_accents (bool, optional) — Whether or not to strip all accents. If this option is not specified, then it will be determined by the value for lowercase (as in the original BERT). wordpieces_prefix (str, optional, defaults to "##") — The prefix for subwords. Construct a “fast” Funnel Transformer tokenizer (backed by HuggingFace’s tokenizers library). Based on WordPiece. This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. build_inputs_with_special_tokens < source > ( token_ids_0 token_ids_1 = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs to which the special tokens will be added. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of input IDs with the appropriate special tokens. Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A Funnel sequence has the following format: single sequence: [CLS] X [SEP] pair of sequences: [CLS] A [SEP] B [SEP] create_token_type_ids_from_sequences < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of token type IDs according to the given sequence(s). Create a mask from the two sequences passed to be used in a sequence-pair classification task. A Funnel Transformer sequence pair mask has the following format: 2 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 | first sequence | second sequence | If token_ids_1 is None, this method only returns the first portion of the mask (0s). Funnel specific outputs class transformers.models.funnel.modeling_funnel.FunnelForPreTrainingOutput < source > ( loss: typing.Optional[torch.FloatTensor] = None logits: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters loss (optional, returned when labels is provided, torch.FloatTensor of shape (1,)) — Total loss of the ELECTRA-style objective. logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Prediction scores of the head (scores for each token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). 
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Output type of FunnelForPreTraining. class transformers.models.funnel.modeling_tf_funnel.TFFunnelForPreTrainingOutput < source > ( logits: tf.Tensor = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None ) Parameters logits (tf.Tensor of shape (batch_size, sequence_length)) — Prediction scores of the head (scores for each token before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Output type of FunnelForPreTraining. FunnelBaseModel class transformers.FunnelBaseModel < source > ( config: FunnelConfig ) Parameters config (FunnelConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The base Funnel Transformer Model transformer outputting raw hidden-states without upsampling head (also called decoder) or any task-specific head on top. The Funnel Transformer model was proposed in Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? 
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FunnelConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FunnelBaseModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FunnelBaseModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small-base") >>> model = FunnelBaseModel.from_pretrained("funnel-transformer/small-base") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state FunnelModel class transformers.FunnelModel < source > ( config: FunnelConfig ) Parameters config (FunnelConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare Funnel Transformer Model transformer outputting raw hidden-states without any specific head on top. 
The Funnel Transformer model was proposed in Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FunnelConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. 
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FunnelModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FunnelModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small") >>> model = FunnelModel.from_pretrained("funnel-transformer/small") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state FunnelForPreTraining class transformers.FunnelForPreTraining < source > ( config: FunnelConfig ) forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.funnel.modeling_funnel.FunnelForPreTrainingOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the ELECTRA-style loss.
Input should be a sequence of tokens (see input_ids docstring) Indices should be in [0, 1]: 0 indicates the token is an original token, 1 indicates the token was replaced. A transformers.models.funnel.modeling_funnel.FunnelForPreTrainingOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FunnelConfig) and inputs. loss (optional, returned when labels is provided, torch.FloatTensor of shape (1,)) — Total loss of the ELECTRA-style objective. logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Prediction scores of the head (scores for each token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FunnelForPreTraining forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import AutoTokenizer, FunnelForPreTraining >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small") >>> model = FunnelForPreTraining.from_pretrained("funnel-transformer/small") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> logits = model(**inputs).logits FunnelForMaskedLM class transformers.FunnelForMaskedLM < source > ( config: FunnelConfig ) Parameters config (FunnelConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Funnel Transformer Model with a language modeling head on top. The Funnel Transformer model was proposed in Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size] A transformers.modeling_outputs.MaskedLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FunnelConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. 
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FunnelForMaskedLM forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FunnelForMaskedLM >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small") >>> model = FunnelForMaskedLM.from_pretrained("funnel-transformer/small") >>> inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> >>> mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0] >>> predicted_token_id = logits[0, mask_token_index].argmax(axis=-1) >>> labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"] >>> >>> labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100) >>> outputs = model(**inputs, labels=labels) FunnelForSequenceClassification class transformers.FunnelForSequenceClassification < source > ( config: FunnelConfig ) Parameters config (FunnelConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Funnel Transformer Model with a sequence classification/regression head on top (two linear layer on top of the first timestep of the last hidden state) e.g. for GLUE tasks. The Funnel Transformer model was proposed in Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. 
Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FunnelConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FunnelForSequenceClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example of single-label classification: >>> import torch >>> from transformers import AutoTokenizer, FunnelForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small-base") >>> model = FunnelForSequenceClassification.from_pretrained("funnel-transformer/small-base") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_id = logits.argmax().item() >>> >>> num_labels = len(model.config.id2label) >>> model = FunnelForSequenceClassification.from_pretrained("funnel-transformer/small-base", num_labels=num_labels) >>> labels = torch.tensor([1]) >>> loss = model(**inputs, labels=labels).loss Example of multi-label classification: >>> import torch >>> from transformers import AutoTokenizer, FunnelForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small-base") >>> model = FunnelForSequenceClassification.from_pretrained("funnel-transformer/small-base", problem_type="multi_label_classification") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5] >>> >>> num_labels = len(model.config.id2label) >>> model = FunnelForSequenceClassification.from_pretrained( ... "funnel-transformer/small-base", num_labels=num_labels, problem_type="multi_label_classification" ... ) >>> labels = torch.sum( ... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1 ... ).to(torch.float) >>> loss = model(**inputs, labels=labels).loss FunnelForMultipleChoice class transformers.FunnelForMultipleChoice < source > ( config: FunnelConfig ) Parameters config (FunnelConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Funnel Transformer Model with a multiple choice classification head on top (two linear layer on top of the first timestep of the last hidden state, and a softmax) e.g. for RocStories/SWAG tasks. The Funnel Transformer model was proposed in Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) — Indices of input sequence tokens in the vocabulary. 
Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See input_ids above) A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FunnelConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above). Classification scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FunnelForMultipleChoice forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: >>> from transformers import AutoTokenizer, FunnelForMultipleChoice >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small-base") >>> model = FunnelForMultipleChoice.from_pretrained("funnel-transformer/small-base") >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." >>> choice0 = "It is eaten with a fork and a knife." >>> choice1 = "It is eaten while held in the hand." >>> labels = torch.tensor(0).unsqueeze(0) >>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True) >>> outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) >>> >>> loss = outputs.loss >>> logits = outputs.logits FunnelForTokenClassification class transformers.FunnelForTokenClassification < source > ( config: FunnelConfig ) Parameters config (FunnelConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Funnel Transformer Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. The Funnel Transformer model was proposed in Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. 
This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1]. A transformers.modeling_outputs.TokenClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FunnelConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FunnelForTokenClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FunnelForTokenClassification >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small") >>> model = FunnelForTokenClassification.from_pretrained("funnel-transformer/small") >>> inputs = tokenizer( ... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt" ... ) >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_token_class_ids = logits.argmax(-1) >>> >>> >>> >>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]] >>> labels = predicted_token_class_ids >>> loss = model(**inputs, labels=labels).loss FunnelForQuestionAnswering class transformers.FunnelForQuestionAnswering < source > ( config: FunnelConfig ) Parameters config (FunnelConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. 
Funnel Transformer Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits). The Funnel Transformer model was proposed in Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None start_positions: typing.Optional[torch.Tensor] = None end_positions: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. start_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. end_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. 
Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FunnelConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FunnelForQuestionAnswering forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FunnelForQuestionAnswering >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small") >>> model = FunnelForQuestionAnswering.from_pretrained("funnel-transformer/small") >>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" >>> inputs = tokenizer(question, text, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> answer_start_index = outputs.start_logits.argmax() >>> answer_end_index = outputs.end_logits.argmax() >>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] >>> >>> target_start_index = torch.tensor([14]) >>> target_end_index = torch.tensor([15]) >>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index) >>> loss = outputs.loss TFFunnelBaseModel class transformers.TFFunnelBaseModel < source > ( *args **kwargs ) Parameters config (FunnelConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The base Funnel Transformer Model transformer outputting raw hidden-states without upsampling head (also called decoder) or any task-specific head on top. The Funnel Transformer model was proposed in Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. This model inherits from TFPreTrainedModel.
Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: bool = False ) → transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. 
See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). A transformers.modeling_tf_outputs.TFBaseModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FunnelConfig) and inputs. last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(tf.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFFunnelBaseModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFFunnelBaseModel >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small-base") >>> model = TFFunnelBaseModel.from_pretrained("funnel-transformer/small-base") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> outputs = model(inputs) >>> last_hidden_states = outputs.last_hidden_state TFFunnelModel class transformers.TFFunnelModel < source > ( *args **kwargs ) Parameters config (XxxConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare Funnel Transformer Model transformer outputting raw hidden-states without any specific head on top. The Funnel Transformer model was proposed in Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) 
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: bool = False ) → transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. 
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). A transformers.modeling_tf_outputs.TFBaseModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FunnelConfig) and inputs. last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(tf.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFFunnelModel forward method overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFFunnelModel >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small") >>> model = TFFunnelModel.from_pretrained("funnel-transformer/small") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> outputs = model(inputs) >>> last_hidden_states = outputs.last_hidden_state TFFunnelForPreTraining class transformers.TFFunnelForPreTraining < source > ( *args **kwargs ) Parameters config (XxxConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Funnel model with a binary classification head on top as used during pretraining for identifying generated tokens. The Funnel Transformer model was proposed in Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: bool = False **kwargs ) → transformers.models.funnel.modeling_tf_funnel.TFFunnelForPreTrainingOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. 
This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). A transformers.models.funnel.modeling_tf_funnel.TFFunnelForPreTrainingOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FunnelConfig) and inputs. logits (tf.Tensor of shape (batch_size, sequence_length)) — Prediction scores of the head (scores for each token before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFFunnelForPreTraining forward method overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import AutoTokenizer, TFFunnelForPreTraining >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small") >>> model = TFFunnelForPreTraining.from_pretrained("funnel-transformer/small") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> logits = model(inputs).logits TFFunnelForMaskedLM class transformers.TFFunnelForMaskedLM < source > ( *args **kwargs ) Parameters config (XxxConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Funnel Model with a language modeling head on top. The Funnel Transformer model was proposed in Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument.
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: bool = False ) → transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. 
This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). labels (tf.Tensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size] A transformers.modeling_tf_outputs.TFMaskedLMOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FunnelConfig) and inputs. loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Masked language modeling (MLM) loss. logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFFunnelForMaskedLM forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFFunnelForMaskedLM >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small") >>> model = TFFunnelForMaskedLM.from_pretrained("funnel-transformer/small") >>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="tf") >>> logits = model(**inputs).logits >>> >>> mask_token_index = tf.where((inputs.input_ids == tokenizer.mask_token_id)[0]) >>> selected_logits = tf.gather_nd(logits[0], indices=mask_token_index) >>> predicted_token_id = tf.math.argmax(selected_logits, axis=-1) >>> labels = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"] >>> >>> labels = tf.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100) >>> outputs = model(**inputs, labels=labels) TFFunnelForSequenceClassification class transformers.TFFunnelForSequenceClassification < source > ( *args **kwargs ) Parameters config (XxxConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. 
Funnel Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. The Funnel Transformer model was proposed in Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: bool = False ) → transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? 
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). labels (tf.Tensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FunnelConfig) and inputs. loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFFunnelForSequenceClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: >>> from transformers import AutoTokenizer, TFFunnelForSequenceClassification >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small-base") >>> model = TFFunnelForSequenceClassification.from_pretrained("funnel-transformer/small-base") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> logits = model(**inputs).logits >>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0]) >>> >>> num_labels = len(model.config.id2label) >>> model = TFFunnelForSequenceClassification.from_pretrained("funnel-transformer/small-base", num_labels=num_labels) >>> labels = tf.constant(1) >>> loss = model(**inputs, labels=labels).loss TFFunnelForMultipleChoice class transformers.TFFunnelForMultipleChoice < source > ( *args **kwargs ) Parameters config (XxxConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Funnel Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. The Funnel Transformer model was proposed in Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! 
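For instance, the three formats above are interchangeable for the same tokenized inputs. A minimal sketch, shown with the base TFFunnelModel and the funnel-transformer/small checkpoint already used in the examples on this page purely for illustration (the variable names are only examples):
>>> from transformers import AutoTokenizer, TFFunnelModel
>>> tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small")
>>> model = TFFunnelModel.from_pretrained("funnel-transformer/small")
>>> encoding = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> # 1. all inputs as keyword arguments
>>> out_kwargs = model(input_ids=encoding["input_ids"], attention_mask=encoding["attention_mask"])
>>> # 2. a list in the first positional argument, in the order given in the docstring
>>> out_list = model([encoding["input_ids"], encoding["attention_mask"]])
>>> # 3. a dictionary keyed by the input names
>>> out_dict = model({"input_ids": encoding["input_ids"], "attention_mask": encoding["attention_mask"]})
The same dictionary of tensors can also be handed to Keras methods such as model.fit() as their x argument, so no unpacking is needed there.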
call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: bool = False ) → transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? inputs_embeds (tf.Tensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). labels (tf.Tensor of shape (batch_size,), optional) — Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices] where num_choices is the size of the second dimension of the input tensors. (See input_ids above) A transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FunnelConfig) and inputs. loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification loss. logits (tf.Tensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above). Classification scores (before SoftMax). 
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFFunnelForMultipleChoice forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFFunnelForMultipleChoice >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small-base") >>> model = TFFunnelForMultipleChoice.from_pretrained("funnel-transformer/small-base") >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." >>> choice0 = "It is eaten with a fork and a knife." >>> choice1 = "It is eaten while held in the hand." >>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="tf", padding=True) >>> inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()} >>> outputs = model(inputs) >>> >>> logits = outputs.logits TFFunnelForTokenClassification class transformers.TFFunnelForTokenClassification < source > ( *args **kwargs ) Parameters config (XxxConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Funnel Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. The Funnel Transformer model was proposed in Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! 
If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: bool = False ) → transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). 
labels (tf.Tensor of shape (batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1]. A transformers.modeling_tf_outputs.TFTokenClassifierOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FunnelConfig) and inputs. loss (tf.Tensor of shape (n,), optional, where n is the number of unmasked labels, returned when labels is provided) — Classification loss. logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFFunnelForTokenClassification forward method overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFFunnelForTokenClassification >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small") >>> model = TFFunnelForTokenClassification.from_pretrained("funnel-transformer/small") >>> inputs = tokenizer( ... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf" ... ) >>> logits = model(**inputs).logits >>> predicted_token_class_ids = tf.math.argmax(logits, axis=-1) >>> predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()] >>> labels = predicted_token_class_ids >>> loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss) TFFunnelForQuestionAnswering class transformers.TFFunnelForQuestionAnswering < source > ( *args **kwargs ) Parameters config (XxxConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Funnel Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits). The Funnel Transformer model was proposed in Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing by Zihang Dai, Guokun Lai, Yiming Yang, Quoc V. Le. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass.
Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None start_positions: np.ndarray | tf.Tensor | None = None end_positions: np.ndarray | tf.Tensor | None = None training: bool = False ) → transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. 
This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). start_positions (tf.Tensor of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. end_positions (tf.Tensor of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. A transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (FunnelConfig) and inputs. loss (tf.Tensor of shape (batch_size, ), optional, returned when start_positions and end_positions are provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. start_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFFunnelForQuestionAnswering forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: >>> from transformers import AutoTokenizer, TFFunnelForQuestionAnswering >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small") >>> model = TFFunnelForQuestionAnswering.from_pretrained("funnel-transformer/small") >>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" >>> inputs = tokenizer(question, text, return_tensors="tf") >>> outputs = model(**inputs) >>> answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0]) >>> answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0]) >>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] >>> >>> target_start_index = tf.constant([14]) >>> target_end_index = tf.constant([15]) >>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index) >>> loss = tf.math.reduce_mean(outputs.loss)
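As a small follow-up sketch (not part of the original example), the predicted span indices can be decoded back into text with the same tokenizer:
>>> answer = tokenizer.decode(predict_answer_tokens)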
https://huggingface.co/docs/transformers/model_doc/gpt2
OpenAI GPT2 Overview OpenAI GPT-2 model was proposed in Language Models are Unsupervised Multitask Learners by Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei and Ilya Sutskever from OpenAI. It’s a causal (unidirectional) transformer pretrained using language modeling on a very large corpus of ~40 GB of text data. The abstract from the paper is the following: GPT-2 is a large transformer-based language model with 1.5 billion parameters, trained on a dataset[1] of 8 million web pages. GPT-2 is trained with a simple objective: predict the next word, given all of the previous words within some text. The diversity of the dataset causes this simple goal to contain naturally occurring demonstrations of many tasks across diverse domains. GPT-2 is a direct scale-up of GPT, with more than 10X the parameters and trained on more than 10X the amount of data. Tips: GPT-2 is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than the left. GPT-2 was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next token in a sequence. Leveraging this feature allows GPT-2 to generate syntactically coherent text as it can be observed in the run_generation.py example script. The model can take the past_key_values (for PyTorch) or past (for TF) as input, which is the previously computed key/value attention pairs. Using this (past_key_values or past) value prevents the model from re-computing pre-computed values in the context of text generation. For PyTorch, see past_key_values argument of the GPT2Model.forward() method, or for TF the past argument of the TFGPT2Model.call() method for more information on its usage. Enabling the scale_attn_by_inverse_layer_idx and reorder_and_upcast_attn flags will apply the training stability improvements from Mistral (for PyTorch only). Write With Transformer is a webapp created and hosted by Hugging Face showcasing the generative capabilities of several models. GPT-2 is one of them and is available in five different sizes: small, medium, large, xl and a distilled version of the small checkpoint: distilgpt-2. This model was contributed by thomwolf. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GPT2. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. Text Generation A blog on how to Finetune a non-English GPT-2 Model with Hugging Face. A blog on How to generate text: using different decoding methods for language generation with Transformers with GPT-2. A blog on Training CodeParrot 🦜 from Scratch, a large GPT-2 model. A blog on Faster Text Generation with TensorFlow and XLA with GPT-2. A blog on How to train a Language Model with Megatron-LM with a GPT-2 model. A notebook on how to finetune GPT2 to generate lyrics in the style of your favorite artist. 🌎 A notebook on how to finetune GPT2 to generate tweets in the style of your favorite Twitter user. 🌎 Causal language modeling chapter of the 🤗 Hugging Face Course. GPT2LMHeadModel is supported by this causal language modeling example script, text generation example script, and notebook. TFGPT2LMHeadModel is supported by this causal language modeling example script and notebook. 
FlaxGPT2LMHeadModel is supported by this causal language modeling example script and notebook. Text classification task guide Token classification task guide Causal language modeling task guide GPT2Config class transformers.GPT2Config < source > ( vocab_size = 50257 n_positions = 1024 n_embd = 768 n_layer = 12 n_head = 12 n_inner = None activation_function = 'gelu_new' resid_pdrop = 0.1 embd_pdrop = 0.1 attn_pdrop = 0.1 layer_norm_epsilon = 1e-05 initializer_range = 0.02 summary_type = 'cls_index' summary_use_proj = True summary_activation = None summary_proj_to_labels = True summary_first_dropout = 0.1 scale_attn_weights = True use_cache = True bos_token_id = 50256 eos_token_id = 50256 scale_attn_by_inverse_layer_idx = False reorder_and_upcast_attn = False **kwargs ) Parameters vocab_size (int, optional, defaults to 50257) — Vocabulary size of the GPT-2 model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling GPT2Model or TFGPT2Model. n_positions (int, optional, defaults to 1024) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). n_embd (int, optional, defaults to 768) — Dimensionality of the embeddings and hidden states. n_layer (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. n_head (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. n_inner (int, optional, defaults to None) — Dimensionality of the inner feed-forward layers. None will set it to 4 times n_embd activation_function (str, optional, defaults to "gelu_new") — Activation function, to be selected in the list ["relu", "silu", "gelu", "tanh", "gelu_new"]. resid_pdrop (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. embd_pdrop (float, optional, defaults to 0.1) — The dropout ratio for the embeddings. attn_pdrop (float, optional, defaults to 0.1) — The dropout ratio for the attention. layer_norm_epsilon (float, optional, defaults to 1e-5) — The epsilon to use in the layer normalization layers. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. summary_type (string, optional, defaults to "cls_index") — Argument used when doing sequence summary, used in the models GPT2DoubleHeadsModel and TFGPT2DoubleHeadsModel. Has to be one of the following options: "last": Take the last token hidden state (like XLNet). "first": Take the first token hidden state (like BERT). "mean": Take the mean of all tokens hidden states. "cls_index": Supply a Tensor of classification token position (like GPT/GPT-2). "attn": Not implemented now, use multi-head attention. summary_use_proj (bool, optional, defaults to True) — Argument used when doing sequence summary, used in the models GPT2DoubleHeadsModel and TFGPT2DoubleHeadsModel. Whether or not to add a projection after the vector extraction. summary_activation (str, optional) — Argument used when doing sequence summary. Used in for the multiple choice head in GPT2DoubleHeadsModel. Pass "tanh" for a tanh activation to the output, any other value will result in no activation. summary_proj_to_labels (bool, optional, defaults to True) — Argument used when doing sequence summary, used in the models GPT2DoubleHeadsModel and TFGPT2DoubleHeadsModel. 
Whether the projection outputs should have config.num_labels or config.hidden_size classes. summary_first_dropout (float, optional, defaults to 0.1) — Argument used when doing sequence summary, used in the models GPT2DoubleHeadsModel and TFGPT2DoubleHeadsModel. The dropout ratio to be used after the projection and activation. scale_attn_weights (bool, optional, defaults to True) — Scale attention weights by dividing by sqrt(hidden_size).. use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). scale_attn_by_inverse_layer_idx (bool, optional, defaults to False) — Whether to additionally scale attention weights by 1 / layer_idx + 1. reorder_and_upcast_attn (bool, optional, defaults to False) — Whether to scale keys (K) prior to computing attention (dot-product) and upcast attention dot-product/softmax to float() when training with mixed precision. This is the configuration class to store the configuration of a GPT2Model or a TFGPT2Model. It is used to instantiate a GPT-2 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the GPT-2 gpt2 architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import GPT2Config, GPT2Model >>> >>> configuration = GPT2Config() >>> >>> model = GPT2Model(configuration) >>> >>> configuration = model.config GPT2Tokenizer class transformers.GPT2Tokenizer < source > ( vocab_file merges_file errors = 'replace' unk_token = '<|endoftext|>' bos_token = '<|endoftext|>' eos_token = '<|endoftext|>' pad_token = None add_prefix_space = False add_bos_token = False **kwargs ) Parameters vocab_file (str) — Path to the vocabulary file. merges_file (str) — Path to the merges file. errors (str, optional, defaults to "replace") — Paradigm to follow when decoding bytes to UTF-8. See bytes.decode for more information. unk_token (str, optional, defaults to <|endoftext|>) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. bos_token (str, optional, defaults to <|endoftext|>) — The beginning of sequence token. eos_token (str, optional, defaults to <|endoftext|>) — The end of sequence token. add_prefix_space (bool, optional, defaults to False) — Whether or not to add an initial space to the input. This allows to treat the leading word just as any other word. (GPT2 tokenizer detect beginning of words by the preceding space). Construct a GPT-2 tokenizer. Based on byte-level Byte-Pair-Encoding. This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will be encoded differently whether it is at the beginning of the sentence (without space) or not: >>> from transformers import GPT2Tokenizer >>> tokenizer = GPT2Tokenizer.from_pretrained("gpt2") >>> tokenizer("Hello world")["input_ids"] [15496, 995] >>> tokenizer(" Hello world")["input_ids"] [18435, 995] You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer or when you call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance. When used with is_split_into_words=True, this tokenizer will add a space before each word (even the first one). 
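As a short illustration of the add_prefix_space workaround mentioned above (a sketch assuming the standard gpt2 checkpoint), instantiating the tokenizer with add_prefix_space=True should make the un-prefixed string encode like the explicitly prefixed one:
>>> from transformers import GPT2Tokenizer
>>> tokenizer = GPT2Tokenizer.from_pretrained("gpt2", add_prefix_space=True)
>>> tokenizer("Hello world")["input_ids"]
[18435, 995]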
This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. save_vocabulary < source > ( save_directory: str filename_prefix: typing.Optional[str] = None ) GPT2TokenizerFast class transformers.GPT2TokenizerFast < source > ( vocab_file = None merges_file = None tokenizer_file = None unk_token = '<|endoftext|>' bos_token = '<|endoftext|>' eos_token = '<|endoftext|>' add_prefix_space = False **kwargs ) Parameters vocab_file (str) — Path to the vocabulary file. merges_file (str) — Path to the merges file. errors (str, optional, defaults to "replace") — Paradigm to follow when decoding bytes to UTF-8. See bytes.decode for more information. unk_token (str, optional, defaults to <|endoftext|>) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. bos_token (str, optional, defaults to <|endoftext|>) — The beginning of sequence token. eos_token (str, optional, defaults to <|endoftext|>) — The end of sequence token. add_prefix_space (bool, optional, defaults to False) — Whether or not to add an initial space to the input. This allows to treat the leading word just as any other word. (GPT2 tokenizer detect beginning of words by the preceding space). trim_offsets (bool, optional, defaults to True) — Whether or not the post-processing step should trim offsets to avoid including whitespaces. Construct a “fast” GPT-2 tokenizer (backed by HuggingFace’s tokenizers library). Based on byte-level Byte-Pair-Encoding. This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will be encoded differently whether it is at the beginning of the sentence (without space) or not: >>> from transformers import GPT2TokenizerFast >>> tokenizer = GPT2TokenizerFast.from_pretrained("gpt2") >>> tokenizer("Hello world")["input_ids"] [15496, 995] >>> tokenizer(" Hello world")["input_ids"] [18435, 995] You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer, but since the model was not pretrained this way, it might yield a decrease in performance. When used with is_split_into_words=True, this tokenizer needs to be instantiated with add_prefix_space=True. This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. GPT2 specific outputs class transformers.models.gpt2.modeling_gpt2.GPT2DoubleHeadsModelOutput < source > ( loss: typing.Optional[torch.FloatTensor] = None mc_loss: typing.Optional[torch.FloatTensor] = None logits: FloatTensor = None mc_logits: FloatTensor = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss. mc_loss (torch.FloatTensor of shape (1,), optional, returned when mc_labels is provided) — Multiple choice classification loss. logits (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). 
mc_logits (torch.FloatTensor of shape (batch_size, num_choices)) — Prediction scores of the multiple choice classification head (scores for each choice before SoftMax). past_key_values (Tuple[Tuple[torch.Tensor]], optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of length config.n_layers, containing tuples of tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)). Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for outputs of GPT2DoubleHeadsModel, which has a language modeling head and a multiple-choice classification head on top. class transformers.models.gpt2.modeling_tf_gpt2.TFGPT2DoubleHeadsModelOutput < source > ( logits: tf.Tensor = None mc_logits: tf.Tensor = None past_key_values: List[tf.Tensor] | None = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None ) Parameters logits (tf.Tensor of shape (batch_size, num_choices, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). mc_logits (tf.Tensor of shape (batch_size, num_choices)) — Prediction scores of the multiple choice classification head (scores for each choice before SoftMax). past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)). Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for outputs of TFGPT2DoubleHeadsModel, which has a language modeling head and a multiple-choice classification head on top. GPT2Model class transformers.GPT2Model < source > ( config ) Parameters config (GPT2Config) — Model configuration class with all the parameters of the model.
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare GPT2 Model transformer outputting raw hidden-states without any specific head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None encoder_hidden_states: typing.Optional[torch.Tensor] = None encoder_attention_mask: typing.Optional[torch.FloatTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past_key_values is None else past_key_values[0][0].shape[-2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. If past_key_values is used, only input_ids that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? past_key_values (Tuple[Tuple[torch.Tensor]] of length config.n_layers) — Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have their past given to this model should not be passed as input_ids as they have already been computed. attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. If past_key_values is used, attention_mask needs to contain the masking strategy that was used for past_key_values. In other words, the attention_mask always has to have the length: len(past_key_values) + len(input_ids) What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, input_ids_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? 
head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If past_key_values is used, optionally only the last inputs_embeds have to be input (see past_key_values). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPT2Config) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. 
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. The GPT2Model forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, GPT2Model >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("gpt2") >>> model = GPT2Model.from_pretrained("gpt2") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state GPT2LMHeadModel class transformers.GPT2LMHeadModel < source > ( config ) Parameters config (GPT2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The GPT2 Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None encoder_hidden_states: typing.Optional[torch.Tensor] = None encoder_attention_mask: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past_key_values is None else past_key_values[0][0].shape[-2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. If past_key_values is used, only input_ids that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? 
past_key_values (Tuple[Tuple[torch.Tensor]] of length config.n_layers) — Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have their past given to this model should not be passed as input_ids as they have already been computed. attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. If past_key_values is used, attention_mask needs to contain the masking strategy that was used for past_key_values. In other words, the attention_mask always has to have the length: len(past_key_values) + len(input_ids) What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, input_ids_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If past_key_values is used, optionally only the last inputs_embeds have to be input (see past_key_values). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set labels = input_ids Indices are selected in [-100, 0, ..., config.vocab_size] All labels set to -100 are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size] A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPT2Config) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction). 
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Cross attentions weights after the attention softmax, used to compute the weighted average in the cross-attention heads. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key, value states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting. Only relevant if config.is_decoder = True. Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. The GPT2LMHeadModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> import torch >>> from transformers import AutoTokenizer, GPT2LMHeadModel >>> tokenizer = AutoTokenizer.from_pretrained("gpt2") >>> model = GPT2LMHeadModel.from_pretrained("gpt2") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs, labels=inputs["input_ids"]) >>> loss = outputs.loss >>> logits = outputs.logits GPT2DoubleHeadsModel class transformers.GPT2DoubleHeadsModel < source > ( config ) Parameters config (GPT2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The GPT2 Model transformer with a language modeling and a multiple-choice classification head on top e.g. for RocStories/SWAG tasks. The two heads are two linear layers. The language modeling head has its weights tied to the input embeddings, the classification head takes as input the input of a specified classification token index in the input sequence). This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) 
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None mc_token_ids: typing.Optional[torch.LongTensor] = None labels: typing.Optional[torch.LongTensor] = None mc_labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None **kwargs ) → transformers.models.gpt2.modeling_gpt2.GPT2DoubleHeadsModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past_key_values is None else past_key_values[0][0].shape[-2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. If past_key_values is used, only input_ids that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? past_key_values (Tuple[Tuple[torch.Tensor]] of length config.n_layers) — Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have their past given to this model should not be passed as input_ids as they have already been computed. attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. If past_key_values is used, attention_mask needs to contain the masking strategy that was used for past_key_values. In other words, the attention_mask always has to have the length: len(past_key_values) + len(input_ids) What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, input_ids_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. 
This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If past_key_values is used, optionally only the last inputs_embeds have to be input (see past_key_values). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. mc_token_ids (torch.LongTensor of shape (batch_size, num_choices), optional, default to index of the last token of the input) — Index of the classification token in each input sequence. Selected in the range [0, input_ids.size(-1) - 1]. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set labels = input_ids. Indices are selected in [-100, 0, ..., config.vocab_size - 1]. All labels set to -100 are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size - 1] mc_labels (torch.LongTensor of shape (batch_size), optional) — Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices] where num_choices is the size of the second dimension of the input tensors. (see input_ids above) A transformers.models.gpt2.modeling_gpt2.GPT2DoubleHeadsModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPT2Config) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss. mc_loss (torch.FloatTensor of shape (1,), optional, returned when mc_labels is provided) — Multiple choice classification loss. logits (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). mc_logits (torch.FloatTensor of shape (batch_size, num_choices)) — Prediction scores of the multiple choice classification head (scores for each choice before SoftMax). past_key_values (Tuple[Tuple[torch.Tensor]], optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of length config.n_layers, containing tuples of tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)). Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. 
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). GPT2Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The GPT2DoubleHeadsModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> import torch >>> from transformers import AutoTokenizer, GPT2DoubleHeadsModel >>> tokenizer = AutoTokenizer.from_pretrained("gpt2") >>> model = GPT2DoubleHeadsModel.from_pretrained("gpt2") >>> >>> num_added_tokens = tokenizer.add_special_tokens({"cls_token": "[CLS]"}) >>> >>> embedding_layer = model.resize_token_embeddings(len(tokenizer)) >>> choices = ["Hello, my dog is cute [CLS]", "Hello, my cat is cute [CLS]"] >>> encoded_choices = [tokenizer.encode(s) for s in choices] >>> cls_token_location = [tokens.index(tokenizer.cls_token_id) for tokens in encoded_choices] >>> input_ids = torch.tensor(encoded_choices).unsqueeze(0) >>> mc_token_ids = torch.tensor([cls_token_location]) >>> outputs = model(input_ids, mc_token_ids=mc_token_ids) >>> lm_logits = outputs.logits >>> mc_logits = outputs.mc_logits GPT2ForQuestionAnswering class transformers.GPT2ForQuestionAnswering < source > ( config ) Parameters config (GPT2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The GPT-2 Model transformer with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits). This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None start_positions: typing.Optional[torch.LongTensor] = None end_positions: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past_key_values is None else past_key_values[0][0].shape[-2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. 
If past_key_values is used, only input_ids that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? past_key_values (Tuple[Tuple[torch.Tensor]] of length config.n_layers) — Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have their past given to this model should not be passed as input_ids as they have already been computed. attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. If past_key_values is used, attention_mask needs to contain the masking strategy that was used for past_key_values. In other words, the attention_mask always has to have the length: len(past_key_values) + len(input_ids) What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, input_ids_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If past_key_values is used, optionally only the last inputs_embeds have to be input (see past_key_values). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. start_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. end_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). 
Positions outside of the sequence are not taken into account for computing the loss. A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPT2Config) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The GPT2ForQuestionAnswering forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. The base gpt2 checkpoint does not ship with a trained question-answering head, so the head used below is randomly initialized and the predicted span is illustrative only. If you get out-of-memory errors when loading a larger checkpoint, you can try adding device_map="auto" in the from_pretrained call. Example: >>> from transformers import AutoTokenizer, GPT2ForQuestionAnswering >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("gpt2") >>> model = GPT2ForQuestionAnswering.from_pretrained("gpt2") >>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" >>> inputs = tokenizer(question, text, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> answer_start_index = outputs.start_logits.argmax() >>> answer_end_index = outputs.end_logits.argmax() >>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] >>> >>> target_start_index = torch.tensor([14]) >>> target_end_index = torch.tensor([15]) >>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index) >>> loss = outputs.loss GPT2ForSequenceClassification class transformers.GPT2ForSequenceClassification < source > ( config ) Parameters config (GPT2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The GPT2 Model transformer with a sequence classification head on top (linear layer). GPT2ForSequenceClassification uses the last token in order to do the classification, as other causal models (e.g. GPT-1) do.
Since it does classification on the last token, it requires to know the position of the last token. If a pad_token_id is defined in the configuration, it finds the last token that is not a padding token in each row. If no pad_token_id is defined, it simply takes the last value in each row of the batch. Since it cannot guess the padding tokens when inputs_embeds are passed instead of input_ids, it does the same (take the last value in each row of the batch). This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.SequenceClassifierOutputWithPast or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past_key_values is None else past_key_values[0][0].shape[-2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. If past_key_values is used, only input_ids that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? past_key_values (Tuple[Tuple[torch.Tensor]] of length config.n_layers) — Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have their past given to this model should not be passed as input_ids as they have already been computed. attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. If past_key_values is used, attention_mask needs to contain the masking strategy that was used for past_key_values. In other words, the attention_mask always has to have the length: len(past_key_values) + len(input_ids) What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, input_ids_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. 
What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If past_key_values is used, optionally only the last inputs_embeds have to be input (see past_key_values). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). Returns transformers.modeling_outputs.SequenceClassifierOutputWithPast or tuple(torch.FloatTensor) A transformers.modeling_outputs.SequenceClassifierOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPT2Config) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. 
The GPT2ForSequenceClassification forward method overrides the __call__ special method. Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them. Example of single-label classification:
>>> import torch
>>> from transformers import AutoTokenizer, GPT2ForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/DialogRPT-updown")
>>> model = GPT2ForSequenceClassification.from_pretrained("microsoft/DialogRPT-updown")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_class_id = logits.argmax().item()
>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = GPT2ForSequenceClassification.from_pretrained("microsoft/DialogRPT-updown", num_labels=num_labels)
>>> labels = torch.tensor([1])
>>> loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
>>> import torch
>>> from transformers import AutoTokenizer, GPT2ForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/DialogRPT-updown")
>>> model = GPT2ForSequenceClassification.from_pretrained("microsoft/DialogRPT-updown", problem_type="multi_label_classification")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]
>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = GPT2ForSequenceClassification.from_pretrained(
...     "microsoft/DialogRPT-updown", num_labels=num_labels, problem_type="multi_label_classification"
... )
>>> labels = torch.sum(
...     torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
>>> loss = model(**inputs, labels=labels).loss
GPT2ForTokenClassification class transformers.GPT2ForTokenClassification < source > ( config ) Parameters config (GPT2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. GPT2 Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past_key_values is None else past_key_values[0][0].shape[-2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. If past_key_values is used, only input_ids that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? past_key_values (Tuple[Tuple[torch.Tensor]] of length config.n_layers) — Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have their past given to this model should not be passed as input_ids as they have already been computed. attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. If past_key_values is used, attention_mask needs to contain the masking strategy that was used for past_key_values. In other words, the attention_mask always has to have the length: len(past_key_values) + len(input_ids) What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, input_ids_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If past_key_values is used, optionally only the last inputs_embeds have to be input (see past_key_values). 
use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1]. A transformers.modeling_outputs.TokenClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPT2Config) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The GPT2ForTokenClassification forward method overrides the __call__ special method. Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them. Example:
>>> from transformers import AutoTokenizer, GPT2ForTokenClassification
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("brad1141/gpt2-finetuned-comp2")
>>> model = GPT2ForTokenClassification.from_pretrained("brad1141/gpt2-finetuned-comp2")
>>> inputs = tokenizer(
...     "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_token_class_ids = logits.argmax(-1)
>>> # Note that tokens are classified rather than input words, which means that
>>> # there might be more predicted token classes than words.
>>> # Multiple token classes might account for the same word.
>>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
>>> predicted_tokens_classes
['Lead', 'Lead', 'Lead', 'Position', 'Lead', 'Lead', 'Lead', 'Lead', 'Lead', 'Lead', 'Lead', 'Lead']
>>> labels = predicted_token_class_ids
>>> loss = model(**inputs, labels=labels).loss
>>> round(loss.item(), 2)
0.25
TFGPT2Model class transformers.TFGPT2Model < source > ( *args **kwargs ) Parameters config (GPT2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare GPT2 Model transformer outputting raw hidden-states without any specific head on top. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!
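The three call styles above can be made concrete with a short sketch. This is a minimal illustration, assuming inputs produced by the GPT-2 tokenizer as in the other examples on this page; all three calls below pass the same tensors to the model:
>>> from transformers import AutoTokenizer, TFGPT2Model
>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = TFGPT2Model.from_pretrained("gpt2")
>>> encoded = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> # 1. keyword arguments, as with PyTorch models
>>> outputs = model(input_ids=encoded["input_ids"], attention_mask=encoded["attention_mask"])
>>> # 2. a list in the first positional argument, in the order given in the docstring
>>> outputs = model([encoded["input_ids"], encoded["attention_mask"]])
>>> # 3. a dictionary mapping input names to tensors
>>> outputs = model({"input_ids": encoded["input_ids"], "attention_mask": encoded["attention_mask"]})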
call < source > ( input_ids: TFModelInputType | None = None past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None encoder_hidden_states: np.ndarray | tf.Tensor | None = None encoder_attention_mask: np.ndarray | tf.Tensor | None = None use_cache: Optional[bool] = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past_key_values is None else past_key_values[0].shape[-2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. If past_key_values is used, only input IDs that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? past_key_values (List[tf.Tensor] of length config.n_layers) — Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The token ids which have their past given to this model should not be passed as input ids as they have already been computed. attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. If past_key_values is used, attention_mask needs to contain the masking strategy that was used for past_key_values. In other words, the attention_mask always has to have the length: len(past_key_values) + len(input_ids) What are attention masks? token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. 
This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). encoder_hidden_states (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. encoder_attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) — contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). use_cache (bool, optional, defaults to True) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past). Set to False during training, True during generation A transformers.modeling_tf_outputs.TFBaseModelOutputWithPastAndCrossAttentions or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPT2Config) and inputs. last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output. past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)). Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(tf.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). 
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. The TFGPT2Model forward method overrides the __call__ special method. Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them. Example:
>>> from transformers import AutoTokenizer, TFGPT2Model
>>> import tensorflow as tf
>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = TFGPT2Model.from_pretrained("gpt2")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> outputs = model(inputs)
>>> last_hidden_states = outputs.last_hidden_state
TFGPT2LMHeadModel class transformers.TFGPT2LMHeadModel < source > ( *args **kwargs ) Parameters config (GPT2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The GPT2 Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!
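Because the language modeling head makes this a causal, auto-regressive model, a common use beyond inspecting logits is text generation with the generate() method available on Transformers TensorFlow models. The following is a minimal sketch, assuming the gpt2 checkpoint and illustrative generation settings (greedy decoding by default, a small max_new_tokens):
>>> from transformers import AutoTokenizer, TFGPT2LMHeadModel
>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = TFGPT2LMHeadModel.from_pretrained("gpt2")
>>> inputs = tokenizer("Hello, my dog is", return_tensors="tf")
>>> # generate() decodes auto-regressively, reusing past_key_values internally to speed up decoding
>>> output_ids = model.generate(**inputs, max_new_tokens=10)
>>> print(tokenizer.decode(output_ids[0], skip_special_tokens=True))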
call < source > ( input_ids: TFModelInputType | None = None past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None encoder_hidden_states: np.ndarray | tf.Tensor | None = None encoder_attention_mask: np.ndarray | tf.Tensor | None = None use_cache: Optional[bool] = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past_key_values is None else past_key_values[0].shape[-2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. If past_key_values is used, only input IDs that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? past_key_values (List[tf.Tensor] of length config.n_layers) — Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The token ids which have their past given to this model should not be passed as input ids as they have already been computed. attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. If past_key_values is used, attention_mask needs to contain the masking strategy that was used for past_key_values. In other words, the attention_mask always has to have the length: len(past_key_values) + len(input_ids) What are attention masks? token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. 
See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). encoder_hidden_states (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. encoder_attention_mask (tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. past_key_values (Tuple[Tuple[tf.Tensor]] of length config.n_layers) — contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). use_cache (bool, optional, defaults to True) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past). Set to False during training, True during generation labels (tf.Tensor of shape (batch_size, sequence_length), optional) — Labels for computing the cross entropy classification loss. Indices should be in [0, ..., config.vocab_size - 1]. A transformers.modeling_tf_outputs.TFCausalLMOutputWithCrossAttentions or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPT2Config) and inputs. loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss (for next-token prediction). logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. 
cross_attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)). Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. The TFGPT2LMHeadModel forward method overrides the __call__ special method. Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them. Example:
>>> from transformers import AutoTokenizer, TFGPT2LMHeadModel
>>> import tensorflow as tf
>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = TFGPT2LMHeadModel.from_pretrained("gpt2")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> outputs = model(inputs)
>>> logits = outputs.logits
TFGPT2DoubleHeadsModel class transformers.TFGPT2DoubleHeadsModel < source > ( *args **kwargs ) Parameters config (GPT2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The GPT2 Model transformer with a language modeling and a multiple-choice classification head on top, e.g. for RocStories/SWAG tasks. The two heads are two linear layers. The language modeling head has its weights tied to the input embeddings; the classification head takes as input the hidden state at a specified classification token index in the input sequence. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports!
If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None mc_token_ids: np.ndarray | tf.Tensor | None = None use_cache: Optional[bool] = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: Optional[bool] = False ) → transformers.models.gpt2.modeling_tf_gpt2.TFGPT2DoubleHeadsModelOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past_key_values is None else past_key_values[0].shape[-2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. If past_key_values is used, only input IDs that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? past_key_values (List[tf.Tensor] of length config.n_layers) — Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The token ids which have their past given to this model should not be passed as input ids as they have already been computed. attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. If past_key_values is used, attention_mask needs to contain the masking strategy that was used for past_key_values. In other words, the attention_mask always has to have the length: len(past_key_values) + len(input_ids) What are attention masks? token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. 
Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). mc_token_ids (tf.Tensor or Numpy array of shape (batch_size, num_choices), optional, default to index of the last token of the input) — Index of the classification token in each input sequence. Selected in the range [0, input_ids.size(-1) - 1]. A transformers.models.gpt2.modeling_tf_gpt2.TFGPT2DoubleHeadsModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPT2Config) and inputs. logits (tf.Tensor of shape (batch_size, num_choices, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). mc_logits (tf.Tensor of shape (batch_size, num_choices)) — Prediction scores of the multiple choice classification head (scores for each choice before SoftMax). past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)). Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). 
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFGPT2DoubleHeadsModel forward method overrides the __call__ special method. Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them. Examples:
>>> import tensorflow as tf
>>> from transformers import AutoTokenizer, TFGPT2DoubleHeadsModel
>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = TFGPT2DoubleHeadsModel.from_pretrained("gpt2")
>>> # Add a [CLS] token to the vocabulary (we should train it also!)
>>> num_added_tokens = tokenizer.add_special_tokens({"cls_token": "[CLS]"})
>>> embedding_layer = model.resize_token_embeddings(
...     len(tokenizer)
... )  # Update the model embeddings with the new vocabulary size
>>> choices = ["Hello, my dog is cute [CLS]", "Hello, my cat is cute [CLS]"]
>>> encoded_choices = [tokenizer.encode(s) for s in choices]
>>> cls_token_location = [tokens.index(tokenizer.cls_token_id) for tokens in encoded_choices]
>>> input_ids = tf.constant(encoded_choices)[None, :]
>>> mc_token_ids = tf.constant([cls_token_location])
>>> outputs = model(input_ids, mc_token_ids=mc_token_ids)
>>> lm_prediction_scores, mc_prediction_scores = outputs[:2]
TFGPT2ForSequenceClassification class transformers.TFGPT2ForSequenceClassification < source > ( *args **kwargs ) Parameters config (GPT2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The GPT2 Model transformer with a sequence classification head on top (linear layer). TFGPT2ForSequenceClassification uses the last token in order to do the classification, as other causal models (e.g. GPT-1) do. Since it does classification on the last token, it needs to know the position of the last token. If a pad_token_id is defined in the configuration, it finds the last token that is not a padding token in each row. If no pad_token_id is defined, it simply takes the last value in each row of the batch. Since it cannot guess the padding tokens when inputs_embeds are passed instead of input_ids, it does the same (takes the last value in each row of the batch). This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports!
If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None use_cache: Optional[bool] = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFSequenceClassifierOutputWithPast or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past_key_values is None else past_key_values[0].shape[-2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. If past_key_values is used, only input IDs that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? past_key_values (List[tf.Tensor] of length config.n_layers) — Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The token ids which have their past given to this model should not be passed as input ids as they have already been computed. attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. If past_key_values is used, attention_mask needs to contain the masking strategy that was used for past_key_values. In other words, the attention_mask always has to have the length: len(past_key_values) + len(input_ids) What are attention masks? token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. 
What are position IDs? head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). labels (tf.Tensor of shape (batch_size, sequence_length), optional) — Labels for computing the cross entropy classification loss. Indices should be in [0, ..., config.vocab_size - 1]. A transformers.modeling_tf_outputs.TFSequenceClassifierOutputWithPast or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPT2Config) and inputs. loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)). Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFGPT2ForSequenceClassification forward method, overrides the __call__ special method. 
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre- and post-processing steps while the latter silently ignores them. Example:
>>> from transformers import AutoTokenizer, TFGPT2ForSequenceClassification
>>> import tensorflow as tf
>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/DialogRPT-updown")
>>> model = TFGPT2ForSequenceClassification.from_pretrained("microsoft/DialogRPT-updown")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> logits = model(**inputs).logits
>>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = TFGPT2ForSequenceClassification.from_pretrained("microsoft/DialogRPT-updown", num_labels=num_labels)
>>> labels = tf.constant(1)
>>> loss = model(**inputs, labels=labels).loss
TFSequenceClassifierOutputWithPast class transformers.modeling_tf_outputs.TFSequenceClassifierOutputWithPast < source > ( loss: tf.Tensor | None = None logits: tf.Tensor = None past_key_values: List[tf.Tensor] | None = None hidden_states: Tuple[tf.Tensor] | None = None attentions: Tuple[tf.Tensor] | None = None ) Parameters loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)). Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Base class for outputs of sentence classification models. TFGPT2Tokenizer class transformers.TFGPT2Tokenizer < source > ( *args **kwargs ) Parameters vocab (Dict[str, int]) — Vocabulary dict for Byte Pair Tokenizer merges (List[str]) — Merges list for Byte Pair Tokenizer This is an in-graph tokenizer for GPT2. It should be initialized similarly to other tokenizers, using the from_pretrained() method. It can also be initialized with the from_tokenizer() method, which imports settings from an existing standard tokenizer object. In-graph tokenizers, unlike other Hugging Face tokenizers, are actually Keras layers and are designed to be run when the model is called, rather than during preprocessing. As a result, they have somewhat more limited options than standard tokenizer classes.
They are most useful when you want to create an end-to-end model that goes straight from tf.string inputs to outputs. from_config < source > ( config ) Parameters config (Dict) — Dictionary with keys such as stated in get_config. Creates TFGPT2Tokenizer from configurations from_pretrained < source > ( pretrained_model_name_or_path: typing.Union[str, os.PathLike] *init_inputs **kwargs ) Parameters pretrained_model_name_or_path (Union[str, os.PathLike]) — Path to pretrained model Creates TFGPT2Tokenizer from pretrained GPT2Tokenizer Examples: from transformers import TFGPT2Tokenizer tf_tokenizer = TFGPT2Tokenizer.from_pretrained("gpt2") from_tokenizer < source > ( tokenizer: GPT2Tokenizer *args **kwargs ) Parameters tokenizer (GPT2Tokenizer) — Creates TFGPT2Tokenizer from GPT2Tokenizer Examples: from transformers import AutoTokenizer, TFGPT2Tokenizer tokenizer = AutoTokenizer.from_pretrained("gpt2") tf_tokenizer = TFGPT2Tokenizer.from_tokenizer(tokenizer) FlaxGPT2Model class transformers.FlaxGPT2Model < source > ( config: GPT2Config input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs ) Parameters config (GPT2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype. Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters. If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16(). The bare GPT2 Model transformer outputting raw hidden-states without any specific head on top. This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a Flax Linen flax.nn.Module subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matter related to general usage and behavior. Finally, this model supports inherent JAX features such as: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization __call__ < source > ( input_ids attention_mask = None position_ids = None encoder_hidden_states: typing.Optional[jax.Array] = None encoder_attention_mask: typing.Optional[jax.Array] = None params: dict = None past_key_values: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor) Parameters input_ids (numpy.ndarray of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length. Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? 
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) — Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length]. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_flax_outputs.FlaxBaseModelOutputWithPastAndCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPT2Config) and inputs. last_hidden_state (jnp.ndarray of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output. past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(jnp.ndarray) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). 
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. The FlaxGPT2PreTrainedModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FlaxGPT2Model >>> tokenizer = AutoTokenizer.from_pretrained("gpt2") >>> model = FlaxGPT2Model.from_pretrained("gpt2") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="jax") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state FlaxGPT2LMHeadModel class transformers.FlaxGPT2LMHeadModel < source > ( config: GPT2Config input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs ) Parameters config (GPT2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype. Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters. If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16(). The GPT2 Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a Flax Linen flax.nn.Module subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matter related to general usage and behavior. Finally, this model supports inherent JAX features such as: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization __call__ < source > ( input_ids attention_mask = None position_ids = None encoder_hidden_states: typing.Optional[jax.Array] = None encoder_attention_mask: typing.Optional[jax.Array] = None params: dict = None past_key_values: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor) Parameters input_ids (numpy.ndarray of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length. Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. 
Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) — Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length]. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_flax_outputs.FlaxCausalLMOutputWithCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPT2Config) and inputs. logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Cross attentions weights after the attention softmax, used to compute the weighted average in the cross-attention heads. past_key_values (tuple(tuple(jnp.ndarray)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of jnp.ndarray tuples of length config.n_layers, with each tuple containing the cached key, value states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting. Only relevant if config.is_decoder = True. Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. The FlaxGPT2PreTrainedModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: >>> from transformers import AutoTokenizer, FlaxGPT2LMHeadModel >>> tokenizer = AutoTokenizer.from_pretrained("gpt2") >>> model = FlaxGPT2LMHeadModel.from_pretrained("gpt2") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="np") >>> outputs = model(**inputs) >>> >>> next_token_logits = outputs.logits[:, -1]
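The example above stops at the raw logits for the next token. As a minimal, illustrative sketch (reusing the tokenizer, model and next_token_logits objects from the example above), a greedy choice of the next token can be taken with jax.numpy.argmax and decoded back to text; for longer continuations, the generate() method of the model is generally preferable since it handles caching and decoding strategies:

>>> import jax.numpy as jnp

>>> # greedy choice: pick the highest-scoring token for the first (and only) sequence in the batch
>>> next_token_id = int(jnp.argmax(next_token_logits, axis=-1)[0])
>>> tokenizer.decode([next_token_id])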
https://huggingface.co/docs/transformers/model_doc/convnext
ConvNeXT Overview The ConvNeXT model was proposed in A ConvNet for the 2020s by Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, Saining Xie. ConvNeXT is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, that claims to outperform them. The abstract from the paper is the following: The “Roaring 20s” of visual recognition began with the introduction of Vision Transformers (ViTs), which quickly superseded ConvNets as the state-of-the-art image classification model. A vanilla ViT, on the other hand, faces difficulties when applied to general computer vision tasks such as object detection and semantic segmentation. It is the hierarchical Transformers (e.g., Swin Transformers) that reintroduced several ConvNet priors, making Transformers practically viable as a generic vision backbone and demonstrating remarkable performance on a wide variety of vision tasks. However, the effectiveness of such hybrid approaches is still largely credited to the intrinsic superiority of Transformers, rather than the inherent inductive biases of convolutions. In this work, we reexamine the design spaces and test the limits of what a pure ConvNet can achieve. We gradually “modernize” a standard ResNet toward the design of a vision Transformer, and discover several key components that contribute to the performance difference along the way. The outcome of this exploration is a family of pure ConvNet models dubbed ConvNeXt. Constructed entirely from standard ConvNet modules, ConvNeXts compete favorably with Transformers in terms of accuracy and scalability, achieving 87.8% ImageNet top-1 accuracy and outperforming Swin Transformers on COCO detection and ADE20K segmentation, while maintaining the simplicity and efficiency of standard ConvNets. Tips: See the code examples below each model regarding usage. ConvNeXT architecture. Taken from the original paper. This model was contributed by nielsr. TensorFlow version of the model was contributed by ariG23498, gante, and sayakpaul (equal contribution). The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ConvNeXT. Image Classification ConvNextForImageClassification is supported by this example script and notebook. See also: Image classification task guide If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ConvNextConfig class transformers.ConvNextConfig < source > ( num_channels = 3 patch_size = 4 num_stages = 4 hidden_sizes = None depths = None hidden_act = 'gelu' initializer_range = 0.02 layer_norm_eps = 1e-12 layer_scale_init_value = 1e-06 drop_path_rate = 0.0 image_size = 224 out_features = None out_indices = None **kwargs ) Parameters num_channels (int, optional, defaults to 3) — The number of input channels. patch_size (int, optional, defaults to 4) — Patch size to use in the patch embedding layer. num_stages (int, optional, defaults to 4) — The number of stages in the model. hidden_sizes (List[int], optional, defaults to [96, 192, 384, 768]) — Dimensionality (hidden size) at each stage. depths (List[int], optional, defaults to [3, 3, 9, 3]) — Depth (number of blocks) for each stage. hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in each block. 
If string, "gelu", "relu", "selu" and "gelu_new" are supported. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers. layer_scale_init_value (float, optional, defaults to 1e-6) — The initial value for the layer scale. drop_path_rate (float, optional, defaults to 0.0) — The drop rate for stochastic depth. out_features (List[str], optional) — If used as backbone, list of features to output. Can be any of "stem", "stage1", "stage2", etc. (depending on how many stages the model has). If unset and out_indices is set, will default to the corresponding stages. If unset and out_indices is unset, will default to the last stage. out_indices (List[int], optional) — If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how many stages the model has). If unset and out_features is set, will default to the corresponding stages. If unset and out_features is unset, will default to the last stage. This is the configuration class to store the configuration of a ConvNextModel. It is used to instantiate a ConvNeXT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the ConvNeXT facebook/convnext-tiny-224 architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import ConvNextConfig, ConvNextModel >>> >>> configuration = ConvNextConfig() >>> >>> model = ConvNextModel(configuration) >>> >>> configuration = model.config ConvNextFeatureExtractor ConvNextImageProcessor class transformers.ConvNextImageProcessor < source > ( do_resize: bool = True size: typing.Dict[str, int] = None crop_pct: float = None resample: Resampling = <Resampling.BILINEAR: 2> do_rescale: bool = True rescale_factor: typing.Union[int, float] = 0.00392156862745098 do_normalize: bool = True image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None **kwargs ) Parameters do_resize (bool, optional, defaults to True) — Controls whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by do_resize in the preprocess method. size (Dict[str, int], optional, defaults to {"shortest_edge": 384}) — Resolution of the output image after resize is applied. If size["shortest_edge"] >= 384, the image is resized to (size["shortest_edge"], size["shortest_edge"]). Otherwise, the smaller edge of the image will be matched to int(size["shortest_edge"] / crop_pct), after which the image is cropped to (size["shortest_edge"], size["shortest_edge"]). Only has an effect if do_resize is set to True. Can be overridden by size in the preprocess method. crop_pct (float, optional, defaults to 224 / 256) — Percentage of the image to crop. Only has an effect if do_resize is True and size < 384. Can be overridden by crop_pct in the preprocess method. resample (PILImageResampling, optional, defaults to PILImageResampling.BILINEAR) — Resampling filter to use if resizing the image. Can be overridden by resample in the preprocess method. do_rescale (bool, optional, defaults to True) — Whether to rescale the image by the specified scale rescale_factor. 
Can be overridden by do_rescale in the preprocess method. rescale_factor (int or float, optional, defaults to 1/255) — Scale factor to use if rescaling the image. Can be overridden by rescale_factor in the preprocess method. do_normalize (bool, optional, defaults to True) — Whether to normalize the image. Can be overridden by the do_normalize parameter in the preprocess method. image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) — Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the image_mean parameter in the preprocess method. image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) — Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the image_std parameter in the preprocess method. Constructs a ConvNeXT image processor. preprocess < source > ( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]] do_resize: bool = None size: typing.Dict[str, int] = None crop_pct: float = None resample: Resampling = None do_rescale: bool = None rescale_factor: float = None do_normalize: bool = None image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'> input_data_format: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None **kwargs ) Parameters images (ImageInput) — Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set do_rescale=False. do_resize (bool, optional, defaults to self.do_resize) — Whether to resize the image. size (Dict[str, int], optional, defaults to self.size) — Size of the output image after resize has been applied. If size["shortest_edge"] >= 384, the image is resized to (size["shortest_edge"], size["shortest_edge"]). Otherwise, the smaller edge of the image will be matched to int(size["shortest_edge"] / crop_pct), after which the image is cropped to (size["shortest_edge"], size["shortest_edge"]). Only has an effect if do_resize is set to True. crop_pct (float, optional, defaults to self.crop_pct) — Percentage of the image to crop if size < 384. resample (int, optional, defaults to self.resample) — Resampling filter to use if resizing the image. This can be one of the PILImageResampling filters. Only has an effect if do_resize is set to True. do_rescale (bool, optional, defaults to self.do_rescale) — Whether to rescale the image values between [0 - 1]. rescale_factor (float, optional, defaults to self.rescale_factor) — Rescale factor to rescale the image by if do_rescale is set to True. do_normalize (bool, optional, defaults to self.do_normalize) — Whether to normalize the image. image_mean (float or List[float], optional, defaults to self.image_mean) — Image mean. image_std (float or List[float], optional, defaults to self.image_std) — Image standard deviation. return_tensors (str or TensorType, optional) — The type of tensors to return. Can be one of: Unset: Return a list of np.ndarray.
TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor. TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor. TensorType.NUMPY or 'np': Return a batch of type np.ndarray. TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray. data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) — The channel dimension format for the output image. Can be one of: "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format. "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format. Unset: Use the channel dimension format of the input image. input_data_format (ChannelDimension or str, optional) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of: "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format. "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format. "none" or ChannelDimension.NONE: image in (height, width) format. Preprocess an image or batch of images. ConvNextModel class transformers.ConvNextModel < source > ( config ) Parameters config (ConvNextConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare ConvNext model outputting raw features without any specific head on top. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: FloatTensor = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See ConvNextImageProcessor.call() for details. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor) A transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ConvNextConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state after a pooling operation on the spatial dimensions. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, num_channels, height, width). 
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. The ConvNextModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoImageProcessor, ConvNextModel >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> image_processor = AutoImageProcessor.from_pretrained("facebook/convnext-tiny-224") >>> model = ConvNextModel.from_pretrained("facebook/convnext-tiny-224") >>> inputs = image_processor(image, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state >>> list(last_hidden_states.shape) [1, 768, 7, 7] ConvNextForImageClassification class transformers.ConvNextForImageClassification < source > ( config ) Parameters config (ConvNextConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. ConvNext Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for ImageNet. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: FloatTensor = None labels: typing.Optional[torch.LongTensor] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See ConvNextImageProcessor.call() for details. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ConvNextConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). 
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the model at the output of each stage. The ConvNextForImageClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoImageProcessor, ConvNextForImageClassification >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> image_processor = AutoImageProcessor.from_pretrained("facebook/convnext-tiny-224") >>> model = ConvNextForImageClassification.from_pretrained("facebook/convnext-tiny-224") >>> inputs = image_processor(image, return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> >>> predicted_label = logits.argmax(-1).item() >>> print(model.config.id2label[predicted_label]) tabby, tabby cat TFConvNextModel class transformers.TFConvNextModel < source > ( *args **kwargs ) Parameters config (ConvNextConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare ConvNext model outputting raw features without any specific head on top. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! 
If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with pixel_values only and nothing else: model(pixel_values) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([pixel_values, attention_mask]) or model([pixel_values, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"pixel_values": pixel_values, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( pixel_values: TFModelInputType | None = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: bool = False ) → transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor) Parameters pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor] `Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See ConvNextImageProcessor.call() for details. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. A transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ConvNextConfig) and inputs. last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. This output is usually not a good summary of the semantic content of the input, you’re often better with averaging or pooling the sequence of hidden-states for the whole input sequence. hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). 
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFConvNextModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import AutoImageProcessor, TFConvNextModel >>> from PIL import Image >>> import requests >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image_processor = AutoImageProcessor.from_pretrained("facebook/convnext-tiny-224") >>> model = TFConvNextModel.from_pretrained("facebook/convnext-tiny-224") >>> inputs = image_processor(images=image, return_tensors="tf") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state TFConvNextForImageClassification class transformers.TFConvNextForImageClassification < source > ( *args **kwargs ) Parameters config (ConvNextConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. ConvNext Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for ImageNet. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with pixel_values only and nothing else: model(pixel_values) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([pixel_values, attention_mask]) or model([pixel_values, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"pixel_values": pixel_values, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! 
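As an illustrative sketch of the three input formats described above (assuming model is a TFConvNextForImageClassification loaded with from_pretrained and pixel_values has been prepared with the image processor, as in the examples on this page; this vision model only takes pixel_values), the following calls are all accepted:

>>> # keyword argument, as in the PyTorch-style examples
>>> outputs = model(pixel_values=pixel_values)
>>> # a single tensor in the first positional argument
>>> outputs = model(pixel_values)
>>> # a list with the input tensors in the order given in the docstring
>>> outputs = model([pixel_values])
>>> # a dictionary associating tensors with input names
>>> outputs = model({"pixel_values": pixel_values})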
call < source > ( pixel_values: TFModelInputType | None = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor) Parameters pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor] `Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See ConvNextImageProcessor.call() for details. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. labels (tf.Tensor or np.ndarray of shape (batch_size,), optional) — Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ConvNextConfig) and inputs. loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFConvNextForImageClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Examples: >>> from transformers import AutoImageProcessor, TFConvNextForImageClassification >>> import tensorflow as tf >>> from PIL import Image >>> import requests >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image_processor = AutoImageProcessor.from_pretrained("facebook/convnext-tiny-224") >>> model = TFConvNextForImageClassification.from_pretrained("facebook/convnext-tiny-224") >>> inputs = image_processor(images=image, return_tensors="tf") >>> outputs = model(**inputs) >>> logits = outputs.logits >>> >>> predicted_class_idx = tf.math.argmax(logits, axis=-1)[0] >>> print("Predicted class:", model.config.id2label[int(predicted_class_idx)])
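As a complement to the explicit image-processor-plus-model examples above, the same classification can also be run through the high-level pipeline API. A brief sketch, assuming the same facebook/convnext-tiny-224 checkpoint and COCO image URL used in the examples above:

>>> from transformers import pipeline

>>> classifier = pipeline("image-classification", model="facebook/convnext-tiny-224")
>>> preds = classifier("http://images.cocodataset.org/val2017/000000039769.jpg")
>>> # each prediction is a dict with a "label" and a "score"
>>> print(preds[0]["label"])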
https://huggingface.co/docs/transformers/model_doc/convbert
ConvBERT Overview The ConvBERT model was proposed in ConvBERT: Improving BERT with Span-based Dynamic Convolution by Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, Shuicheng Yan. The abstract from the paper is the following: Pre-trained language models like BERT and its variants have recently achieved impressive performance in various natural language understanding tasks. However, BERT heavily relies on the global self-attention block and thus suffers large memory footprint and computation cost. Although all its attention heads query on the whole input sequence for generating the attention map from a global perspective, we observe some heads only need to learn local dependencies, which means the existence of computation redundancy. We therefore propose a novel span-based dynamic convolution to replace these self-attention heads to directly model local dependencies. The novel convolution heads, together with the rest self-attention heads, form a new mixed attention block that is more efficient at both global and local context learning. We equip BERT with this mixed attention design and build a ConvBERT model. Experiments have shown that ConvBERT significantly outperforms BERT and its variants in various downstream tasks, with lower training cost and fewer model parameters. Remarkably, ConvBERTbase model achieves 86.4 GLUE score, 0.7 higher than ELECTRAbase, while using less than 1/4 training cost. Code and pre-trained models will be released. ConvBERT training tips are similar to those of BERT. This model was contributed by abhishek. The original implementation can be found here: https://github.com/yitu-opensource/ConvBert Documentation resources Text classification task guide Token classification task guide Question answering task guide Masked language modeling task guide Multiple choice task guide ConvBertConfig class transformers.ConvBertConfig < source > ( vocab_size = 30522 hidden_size = 768 num_hidden_layers = 12 num_attention_heads = 12 intermediate_size = 3072 hidden_act = 'gelu' hidden_dropout_prob = 0.1 attention_probs_dropout_prob = 0.1 max_position_embeddings = 512 type_vocab_size = 2 initializer_range = 0.02 layer_norm_eps = 1e-12 pad_token_id = 1 bos_token_id = 0 eos_token_id = 2 embedding_size = 768 head_ratio = 2 conv_kernel_size = 9 num_groups = 1 classifier_dropout = None **kwargs ) Parameters vocab_size (int, optional, defaults to 30522) — Vocabulary size of the ConvBERT model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling ConvBertModel or TFConvBertModel. hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder. hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported. hidden_dropout_prob (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. 
attention_probs_dropout_prob (float, optional, defaults to 0.1) — The dropout ratio for the attention probabilities. max_position_embeddings (int, optional, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). type_vocab_size (int, optional, defaults to 2) — The vocabulary size of the token_type_ids passed when calling ConvBertModel or TFConvBertModel. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers. head_ratio (int, optional, defaults to 2) — Ratio gamma to reduce the number of attention heads. num_groups (int, optional, defaults to 1) — The number of groups for grouped linear layers for ConvBert model conv_kernel_size (int, optional, defaults to 9) — The size of the convolutional kernel. classifier_dropout (float, optional) — The dropout ratio for the classification head. This is the configuration class to store the configuration of a ConvBertModel. It is used to instantiate an ConvBERT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the ConvBERT YituTech/conv-bert-base architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import ConvBertConfig, ConvBertModel >>> >>> configuration = ConvBertConfig() >>> >>> model = ConvBertModel(configuration) >>> >>> configuration = model.config ConvBertTokenizer class transformers.ConvBertTokenizer < source > ( vocab_file do_lower_case = True do_basic_tokenize = True never_split = None unk_token = '[UNK]' sep_token = '[SEP]' pad_token = '[PAD]' cls_token = '[CLS]' mask_token = '[MASK]' tokenize_chinese_chars = True strip_accents = None **kwargs ) Parameters vocab_file (str) — File containing the vocabulary. do_lower_case (bool, optional, defaults to True) — Whether or not to lowercase the input when tokenizing. do_basic_tokenize (bool, optional, defaults to True) — Whether or not to do basic tokenization before WordPiece. never_split (Iterable, optional) — Collection of tokens which will never be split during tokenization. Only has an effect when do_basic_tokenize=True unk_token (str, optional, defaults to "[UNK]") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. sep_token (str, optional, defaults to "[SEP]") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. pad_token (str, optional, defaults to "[PAD]") — The token used for padding, for example when batching sequences of different lengths. cls_token (str, optional, defaults to "[CLS]") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. mask_token (str, optional, defaults to "[MASK]") — The token used for masking values. 
This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. tokenize_chinese_chars (bool, optional, defaults to True) — Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this issue). strip_accents (bool, optional) — Whether or not to strip all accents. If this option is not specified, then it will be determined by the value for lowercase (as in the original ConvBERT). Construct a ConvBERT tokenizer. Based on WordPiece. This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. build_inputs_with_special_tokens < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs to which the special tokens will be added. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of input IDs with the appropriate special tokens. Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A ConvBERT sequence has the following format: single sequence: [CLS] X [SEP] pair of sequences: [CLS] A [SEP] B [SEP] get_special_tokens_mask < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None already_has_special_tokens: bool = False ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. already_has_special_tokens (bool, optional, defaults to False) — Whether or not the token list is already formatted with special tokens for the model. A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model method. create_token_type_ids_from_sequences < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of token type IDs according to the given sequence(s). Create a mask from the two sequences passed to be used in a sequence-pair classification task. A ConvBERT sequence pair mask has the following format: 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 | first sequence | second sequence | If token_ids_1 is None, this method only returns the first portion of the mask (0s). save_vocabulary < source > ( save_directory: str filename_prefix: typing.Optional[str] = None ) ConvBertTokenizerFast class transformers.ConvBertTokenizerFast < source > ( vocab_file = None tokenizer_file = None do_lower_case = True unk_token = '[UNK]' sep_token = '[SEP]' pad_token = '[PAD]' cls_token = '[CLS]' mask_token = '[MASK]' tokenize_chinese_chars = True strip_accents = None **kwargs ) Parameters vocab_file (str) — File containing the vocabulary. do_lower_case (bool, optional, defaults to True) — Whether or not to lowercase the input when tokenizing. unk_token (str, optional, defaults to "[UNK]") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. 
sep_token (str, optional, defaults to "[SEP]") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. pad_token (str, optional, defaults to "[PAD]") — The token used for padding, for example when batching sequences of different lengths. cls_token (str, optional, defaults to "[CLS]") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. mask_token (str, optional, defaults to "[MASK]") — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. clean_text (bool, optional, defaults to True) — Whether or not to clean the text before tokenization by removing any control characters and replacing all whitespaces by the classic one. tokenize_chinese_chars (bool, optional, defaults to True) — Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this issue). strip_accents (bool, optional) — Whether or not to strip all accents. If this option is not specified, then it will be determined by the value for lowercase (as in the original ConvBERT). wordpieces_prefix (str, optional, defaults to "##") — The prefix for subwords. Construct a “fast” ConvBERT tokenizer (backed by HuggingFace’s tokenizers library). Based on WordPiece. This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. build_inputs_with_special_tokens < source > ( token_ids_0 token_ids_1 = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs to which the special tokens will be added. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of input IDs with the appropriate special tokens. Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A ConvBERT sequence has the following format: single sequence: [CLS] X [SEP] pair of sequences: [CLS] A [SEP] B [SEP] create_token_type_ids_from_sequences < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of token type IDs according to the given sequence(s). Create a mask from the two sequences passed to be used in a sequence-pair classification task. A ConvBERT sequence pair mask has the following format: 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 | first sequence | second sequence | If token_ids_1 is None, this method only returns the first portion of the mask (0s). ConvBertModel class transformers.ConvBertModel < source > ( config ) Parameters config (ConvBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare ConvBERT Model transformer outputting raw hidden-states without any specific head on top. 
This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithCrossAttentions or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: A transformers.modeling_outputs.BaseModelOutputWithCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ConvBertConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. The ConvBertModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: >>> from transformers import AutoTokenizer, ConvBertModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base") >>> model = ConvBertModel.from_pretrained("YituTech/conv-bert-base") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state ConvBertForMaskedLM class transformers.ConvBertForMaskedLM < source > ( config ) Parameters config (ConvBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. ConvBERT Model with a language modeling head on top. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: A transformers.modeling_outputs.MaskedLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ConvBertConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The ConvBertForMaskedLM forward method, overrides the __call__ special method. 
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, ConvBertForMaskedLM >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base") >>> model = ConvBertForMaskedLM.from_pretrained("YituTech/conv-bert-base") >>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> >>> mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0] >>> predicted_token_id = logits[0, mask_token_index].argmax(axis=-1) >>> labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"] >>> >>> labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100) >>> outputs = model(**inputs, labels=labels) ConvBertForSequenceClassification class transformers.ConvBertForSequenceClassification < source > ( config ) Parameters config (ConvBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. ConvBERT Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ConvBertConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). 
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The ConvBertForSequenceClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example of single-label classification: >>> import torch >>> from transformers import AutoTokenizer, ConvBertForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base") >>> model = ConvBertForSequenceClassification.from_pretrained("YituTech/conv-bert-base") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_id = logits.argmax().item() >>> >>> num_labels = len(model.config.id2label) >>> model = ConvBertForSequenceClassification.from_pretrained("YituTech/conv-bert-base", num_labels=num_labels) >>> labels = torch.tensor([1]) >>> loss = model(**inputs, labels=labels).loss Example of multi-label classification: >>> import torch >>> from transformers import AutoTokenizer, ConvBertForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base") >>> model = ConvBertForSequenceClassification.from_pretrained("YituTech/conv-bert-base", problem_type="multi_label_classification") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5] >>> >>> num_labels = len(model.config.id2label) >>> model = ConvBertForSequenceClassification.from_pretrained( ... "YituTech/conv-bert-base", num_labels=num_labels, problem_type="multi_label_classification" ... ) >>> labels = torch.sum( ... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1 ... ).to(torch.float) >>> loss = model(**inputs, labels=labels).loss ConvBertForMultipleChoice class transformers.ConvBertForMultipleChoice < source > ( config ) Parameters config (ConvBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. ConvBERT Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ConvBertConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above). Classification scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The ConvBertForMultipleChoice forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, ConvBertForMultipleChoice >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base") >>> model = ConvBertForMultipleChoice.from_pretrained("YituTech/conv-bert-base") >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." >>> choice0 = "It is eaten with a fork and a knife." >>> choice1 = "It is eaten while held in the hand." 
>>> labels = torch.tensor(0).unsqueeze(0) >>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True) >>> outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) >>> >>> loss = outputs.loss >>> logits = outputs.logits ConvBertForTokenClassification class transformers.ConvBertForTokenClassification < source > ( config ) Parameters config (ConvBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. ConvBERT Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: A transformers.modeling_outputs.TokenClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ConvBertConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The ConvBertForTokenClassification forward method, overrides the __call__ special method. 
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, ConvBertForTokenClassification
>>> import torch
>>> tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base")
>>> model = ConvBertForTokenClassification.from_pretrained("YituTech/conv-bert-base")
>>> inputs = tokenizer(
...     "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt"
... )
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_token_class_ids = logits.argmax(-1)
>>> # Note that tokens are classified rather than input words, which means that
>>> # there might be more predicted token classes than words.
>>> # Multiple token classes might account for the same word.
>>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]
>>> labels = predicted_token_class_ids
>>> loss = model(**inputs, labels=labels).loss
ConvBertForQuestionAnswering
class transformers.ConvBertForQuestionAnswering < source > ( config ) Parameters config (ConvBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. ConvBERT Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute span start logits and span end logits). This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None start_positions: typing.Optional[torch.LongTensor] = None end_positions: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor)
Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked.
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ConvBertConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). 
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The ConvBertForQuestionAnswering forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, ConvBertForQuestionAnswering >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base") >>> model = ConvBertForQuestionAnswering.from_pretrained("YituTech/conv-bert-base") >>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" >>> inputs = tokenizer(question, text, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> answer_start_index = outputs.start_logits.argmax() >>> answer_end_index = outputs.end_logits.argmax() >>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] >>> >>> target_start_index = torch.tensor([14]) >>> target_end_index = torch.tensor([15]) >>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index) >>> loss = outputs.loss TFConvBertModel class transformers.TFConvBertModel < source > ( *args **kwargs ) Parameters config (ConvBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare ConvBERT Model transformer outputting raw hidden-states without any specific head on top. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! 
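For eager, interactive use, the first format (plain keyword arguments) is usually the simplest. A minimal sketch, using the same checkpoint as the example further below; the input sentence is just a placeholder:

>>> from transformers import AutoTokenizer, TFConvBertModel

>>> tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base")
>>> model = TFConvBertModel.from_pretrained("YituTech/conv-bert-base")

>>> inputs = tokenizer("ConvBERT mixes convolution and self-attention.", return_tensors="tf")

>>> # first format: pass every tensor as a keyword argument
>>> outputs = model(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])
>>> outputs.last_hidden_state.shape  # (batch_size, sequence_length, hidden_size)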
If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: Optional[Union[np.array, tf.Tensor]] = None token_type_ids: Optional[Union[np.array, tf.Tensor]] = None position_ids: Optional[Union[np.array, tf.Tensor]] = None head_mask: Optional[Union[np.array, tf.Tensor]] = None inputs_embeds: tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: bool = False ) → transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. 
See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). A transformers.modeling_tf_outputs.TFBaseModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ConvBertConfig) and inputs. last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(tf.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFConvBertModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFConvBertModel >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base") >>> model = TFConvBertModel.from_pretrained("YituTech/conv-bert-base") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> outputs = model(inputs) >>> last_hidden_states = outputs.last_hidden_state TFConvBertForMaskedLM class transformers.TFConvBertForMaskedLM < source > ( *args **kwargs ) Parameters config (ConvBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. ConvBERT Model with a language modeling head on top. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. 
Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. 
This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). labels (tf.Tensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size] A transformers.modeling_tf_outputs.TFMaskedLMOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ConvBertConfig) and inputs. loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Masked language modeling (MLM) loss. logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFConvBertForMaskedLM forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: >>> from transformers import AutoTokenizer, TFConvBertForMaskedLM >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base") >>> model = TFConvBertForMaskedLM.from_pretrained("YituTech/conv-bert-base") >>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="tf") >>> logits = model(**inputs).logits >>> >>> mask_token_index = tf.where((inputs.input_ids == tokenizer.mask_token_id)[0]) >>> selected_logits = tf.gather_nd(logits[0], indices=mask_token_index) >>> predicted_token_id = tf.math.argmax(selected_logits, axis=-1) >>> labels = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"] >>> >>> labels = tf.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100) >>> outputs = model(**inputs, labels=labels) TFConvBertForSequenceClassification class transformers.TFConvBertForSequenceClassification < source > ( *args **kwargs ) Parameters config (ConvBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. ConvBERT Model transformer with a sequence classification/regression head on top e.g., for GLUE tasks. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! 
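To make the training path concrete, here is a minimal, hypothetical fine-tuning sketch for TFConvBertForSequenceClassification. The texts, labels and hyperparameters are invented for illustration, and the compile() call without a loss relies on recent transformers versions, where the model then falls back to its built-in task loss (on older versions, pass an explicit Keras loss instead):

>>> import tensorflow as tf
>>> from transformers import AutoTokenizer, TFConvBertForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base")
>>> model = TFConvBertForSequenceClassification.from_pretrained("YituTech/conv-bert-base", num_labels=2)

>>> # toy data, invented for this sketch
>>> texts = ["a delightful little film", "flat and overlong"]
>>> labels = [1, 0]

>>> # dict format: the tokenizer output can be passed as the first positional argument
>>> enc = tokenizer(texts, padding=True, return_tensors="tf")
>>> logits = model(dict(enc)).logits

>>> # Keras training: with no loss passed to compile(), the model uses its internal task loss
>>> model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5))
>>> model.fit(dict(enc), tf.constant(labels), epochs=1, batch_size=2)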
call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). labels (tf.Tensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). 
A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ConvBertConfig) and inputs. loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFConvBertForSequenceClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFConvBertForSequenceClassification >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base") >>> model = TFConvBertForSequenceClassification.from_pretrained("YituTech/conv-bert-base") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> logits = model(**inputs).logits >>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0]) >>> >>> num_labels = len(model.config.id2label) >>> model = TFConvBertForSequenceClassification.from_pretrained("YituTech/conv-bert-base", num_labels=num_labels) >>> labels = tf.constant(1) >>> loss = model(**inputs, labels=labels).loss TFConvBertForMultipleChoice class transformers.TFConvBertForMultipleChoice < source > ( *args **kwargs ) Parameters config (ConvBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. ConvBERT Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. 
The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. 
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). labels (tf.Tensor of shape (batch_size,), optional) — Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices] where num_choices is the size of the second dimension of the input tensors. (See input_ids above) A transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ConvBertConfig) and inputs. loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification loss. logits (tf.Tensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above). Classification scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFConvBertForMultipleChoice forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFConvBertForMultipleChoice >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base") >>> model = TFConvBertForMultipleChoice.from_pretrained("YituTech/conv-bert-base") >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." >>> choice0 = "It is eaten with a fork and a knife." >>> choice1 = "It is eaten while held in the hand." 
>>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="tf", padding=True) >>> inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()} >>> outputs = model(inputs) >>> >>> logits = outputs.logits TFConvBertForTokenClassification class transformers.TFConvBertForTokenClassification < source > ( *args **kwargs ) Parameters config (ConvBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. ConvBERT Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? 
attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). labels (tf.Tensor of shape (batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1]. A transformers.modeling_tf_outputs.TFTokenClassifierOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ConvBertConfig) and inputs. loss (tf.Tensor of shape (n,), optional, where n is the number of unmasked labels, returned when labels is provided) — Classification loss. logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. 
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFConvBertForTokenClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example:

>>> from transformers import AutoTokenizer, TFConvBertForTokenClassification
>>> import tensorflow as tf

>>> tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base")
>>> model = TFConvBertForTokenClassification.from_pretrained("YituTech/conv-bert-base")

>>> inputs = tokenizer(
...     "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf"
... )

>>> logits = model(**inputs).logits
>>> predicted_token_class_ids = tf.math.argmax(logits, axis=-1)

>>> # Note that tokens are classified rather than input words, which means that
>>> # there might be more predicted token classes than words.
>>> # Multiple token classes might account for the same word
>>> predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()]

>>> labels = predicted_token_class_ids
>>> loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss)

TFConvBertForQuestionAnswering class transformers.TFConvBertForQuestionAnswering < source > ( *args **kwargs ) Parameters config (ConvBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. ConvBERT Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits). This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports!
If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None start_positions: tf.Tensor | None = None end_positions: tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. 
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). start_positions (tf.Tensor of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. end_positions (tf.Tensor of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. A transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ConvBertConfig) and inputs. loss (tf.Tensor of shape (batch_size, ), optional, returned when start_positions and end_positions are provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. start_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFConvBertForQuestionAnswering forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example:

>>> from transformers import AutoTokenizer, TFConvBertForQuestionAnswering
>>> import tensorflow as tf

>>> tokenizer = AutoTokenizer.from_pretrained("YituTech/conv-bert-base")
>>> model = TFConvBertForQuestionAnswering.from_pretrained("YituTech/conv-bert-base")

>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
>>> inputs = tokenizer(question, text, return_tensors="tf")
>>> outputs = model(**inputs)

>>> answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
>>> answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]

>>> # target is "nice puppet"
>>> target_start_index = tf.constant([14])
>>> target_end_index = tf.constant([15])

>>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
>>> loss = tf.math.reduce_mean(outputs.loss)
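Continuing the example above, the selected token IDs can be decoded back into an answer string; a minimal sketch (the exact text depends on the checkpoint, since YituTech/conv-bert-base has no fine-tuned question-answering head):

>>> # turn the predicted span back into text
>>> answer = tokenizer.decode(predict_answer_tokens)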
https://huggingface.co/docs/transformers/model_doc/gptj
GPT-J Overview The GPT-J model was released in the kingoflolz/mesh-transformer-jax repository by Ben Wang and Aran Komatsuzaki. It is a GPT-2-like causal language model trained on the Pile dataset. This model was contributed by Stella Biderman. Tips: To load GPT-J in float32 one would need at least 2x model size RAM: 1x for initial weights and another 1x to load the checkpoint. So for GPT-J it would take at least 48GB RAM to just load the model. To reduce the RAM usage there are a few options. The torch_dtype argument can be used to initialize the model in half-precision on a CUDA device only. There is also a fp16 branch which stores the fp16 weights, which could be used to further minimize the RAM usage: >>> from transformers import GPTJForCausalLM >>> import torch >>> device = "cuda" >>> model = GPTJForCausalLM.from_pretrained( ... "EleutherAI/gpt-j-6B", ... revision="float16", ... torch_dtype=torch.float16, ... ).to(device) The model should fit on 16GB GPU for inference. For training/fine-tuning it would take much more GPU RAM. Adam optimizer for example makes four copies of the model: model, gradients, average and squared average of the gradients. So it would need at least 4x model size GPU memory, even with mixed precision as gradient updates are in fp32. This is not including the activations and data batches, which would again require some more GPU RAM. So one should explore solutions such as DeepSpeed, to train/fine-tune the model. Another option is to use the original codebase to train/fine-tune the model on TPU and then convert the model to Transformers format for inference. Instructions for that could be found here Although the embedding matrix has a size of 50400, only 50257 entries are used by the GPT-2 tokenizer. These extra tokens are added for the sake of efficiency on TPUs. To avoid the mismatch between embedding matrix size and vocab size, the tokenizer for GPT-J contains 143 extra tokens <|extratoken_1|>... <|extratoken_143|>, so the vocab_size of tokenizer also becomes 50400. Generation The generate() method can be used to generate text using GPT-J model. >>> from transformers import AutoModelForCausalLM, AutoTokenizer >>> model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B") >>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B") >>> prompt = ( ... "In a shocking finding, scientists discovered a herd of unicorns living in a remote, " ... "previously unexplored valley, in the Andes Mountains. Even more surprising to the " ... "researchers was the fact that the unicorns spoke perfect English." ... ) >>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids >>> gen_tokens = model.generate( ... input_ids, ... do_sample=True, ... temperature=0.9, ... max_length=100, ... ) >>> gen_text = tokenizer.batch_decode(gen_tokens)[0] …or in float16 precision: >>> from transformers import GPTJForCausalLM, AutoTokenizer >>> import torch >>> device = "cuda" >>> model = GPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", torch_dtype=torch.float16).to(device) >>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B") >>> prompt = ( ... "In a shocking finding, scientists discovered a herd of unicorns living in a remote, " ... "previously unexplored valley, in the Andes Mountains. Even more surprising to the " ... "researchers was the fact that the unicorns spoke perfect English." ... ) >>> input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device) >>> gen_tokens = model.generate( ... input_ids, ... do_sample=True, ... 
temperature=0.9, ... max_length=100, ... ) >>> gen_text = tokenizer.batch_decode(gen_tokens)[0] Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GPT-J. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. Text Generation Description of GPT-J. A blog on how to Deploy GPT-J 6B for inference using Hugging Face Transformers and Amazon SageMaker. A blog on how to Accelerate GPT-J inference with DeepSpeed-Inference on GPUs. A blog post introducing GPT-J-6B: 6B JAX-Based Transformer. 🌎 A notebook for GPT-J-6B Inference Demo. 🌎 Another notebook demonstrating Inference with GPT-J-6B. Causal language modeling chapter of the 🤗 Hugging Face Course. GPTJForCausalLM is supported by this causal language modeling example script, text generation example script, and notebook. TFGPTJForCausalLM is supported by this causal language modeling example script and notebook. FlaxGPTJForCausalLM is supported by this causal language modeling example script and notebook. Documentation resources Text classification task guide Question answering task guide Causal language modeling task guide GPTJConfig class transformers.GPTJConfig < source > ( vocab_size = 50400 n_positions = 2048 n_embd = 4096 n_layer = 28 n_head = 16 rotary_dim = 64 n_inner = None activation_function = 'gelu_new' resid_pdrop = 0.0 embd_pdrop = 0.0 attn_pdrop = 0.0 layer_norm_epsilon = 1e-05 initializer_range = 0.02 use_cache = True bos_token_id = 50256 eos_token_id = 50256 tie_word_embeddings = False **kwargs ) Parameters vocab_size (int, optional, defaults to 50400) — Vocabulary size of the GPT-J model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling GPTJModel. n_positions (int, optional, defaults to 2048) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). n_embd (int, optional, defaults to 4096) — Dimensionality of the embeddings and hidden states. n_layer (int, optional, defaults to 28) — Number of hidden layers in the Transformer encoder. n_head (int, optional, defaults to 16) — Number of attention heads for each attention layer in the Transformer encoder. rotary_dim (int, optional, defaults to 64) — Number of dimensions in the embedding that Rotary Position Embedding is applied to. n_inner (int, optional, defaults to None) — Dimensionality of the inner feed-forward layers. None will set it to 4 times n_embd. activation_function (str, optional, defaults to "gelu_new") — Activation function, to be selected in the list ["relu", "silu", "gelu", "tanh", "gelu_new"]. resid_pdrop (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. embd_pdrop (float, optional, defaults to 0.0) — The dropout ratio for the embeddings. attn_pdrop (float, optional, defaults to 0.0) — The dropout ratio for the attention. layer_norm_epsilon (float, optional, defaults to 1e-5) — The epsilon to use in the layer normalization layers. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). This is the configuration class to store the configuration of a GPTJModel. It is used to instantiate a GPT-J model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the GPT-J EleutherAI/gpt-j-6B architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example:

>>> from transformers import GPTJModel, GPTJConfig

>>> # Initializing a GPT-J configuration
>>> configuration = GPTJConfig()

>>> # Initializing a model from the configuration
>>> model = GPTJModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config

GPTJModel class transformers.GPTJModel < source > ( config ) Parameters config (GPTJConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare GPT-J Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_attention_heads,) or (n_layer, num_attention_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_dim), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPTJConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The GPTJModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. This example uses a random model as the real ones are all very big. To get proper results, you should use EleutherAI/gpt-j-6B instead of hf-internal-testing/tiny-random-gptj. If you get out-of-memory when loading that checkpoint, you can try adding device_map="auto" in the from_pretrained call. 
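For instance, a minimal sketch of loading the full checkpoint with automatic device placement (this assumes the accelerate package is installed; half precision is optional but keeps memory usage lower):

>>> from transformers import GPTJModel
>>> import torch

>>> # shard the 6B checkpoint across the available devices automatically
>>> model = GPTJModel.from_pretrained(
...     "EleutherAI/gpt-j-6B", device_map="auto", torch_dtype=torch.float16
... )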
Example: >>> from transformers import AutoTokenizer, GPTJModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/tiny-random-gptj") >>> model = GPTJModel.from_pretrained("hf-internal-testing/tiny-random-gptj") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state GPTJForCausalLM class transformers.GPTJForCausalLM < source > ( config ) Parameters config (GPTJConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The GPT-J Model transformer with a language modeling head on top. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_attention_heads,) or (n_layer, num_attention_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_dim), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. 
See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set labels = input_ids Indices are selected in [-100, 0, ..., config.vocab_size] All labels set to -100 are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size] A transformers.modeling_outputs.CausalLMOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPTJConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction). logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The GPTJForCausalLM forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. This example uses a random model as the real ones are all very big. To get proper results, you should use EleutherAI/gpt-j-6B instead of hf-internal-testing/tiny-random-gptj. If you get out-of-memory when loading that checkpoint, you can try adding device_map="auto" in the from_pretrained call. 
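Because every label set to -100 is ignored by the loss, a common pattern when batching is to copy input_ids into labels and mask out the padded positions. A minimal sketch using the same tiny random checkpoint as the example below (reusing the EOS token for padding is an assumption of this sketch, since the GPT-J tokenizer defines no pad token by default):

>>> import torch
>>> from transformers import AutoTokenizer, GPTJForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/tiny-random-gptj")
>>> model = GPTJForCausalLM.from_pretrained("hf-internal-testing/tiny-random-gptj")

>>> # assumption of this sketch: reuse the EOS token for padding
>>> tokenizer.pad_token = tokenizer.eos_token
>>> batch = tokenizer(["Hello, my dog is cute", "Hi"], padding=True, return_tensors="pt")

>>> # labels can simply mirror input_ids (the model shifts them internally);
>>> # padded positions are set to -100 so they do not contribute to the loss
>>> labels = batch["input_ids"].clone()
>>> labels[batch["attention_mask"] == 0] = -100
>>> loss = model(**batch, labels=labels).loss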
Example: >>> import torch >>> from transformers import AutoTokenizer, GPTJForCausalLM >>> tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/tiny-random-gptj") >>> model = GPTJForCausalLM.from_pretrained("hf-internal-testing/tiny-random-gptj") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs, labels=inputs["input_ids"]) >>> loss = outputs.loss >>> logits = outputs.logits GPTJForSequenceClassification class transformers.GPTJForSequenceClassification < source > ( config ) Parameters config (GPTJConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The GPT-J Model transformer with a sequence classification head on top (linear layer). GPTJForSequenceClassification uses the last token in order to do the classification, as other causal models (e.g. GPT, GPT-2, GPT-Neo) do. Since it does classification on the last token, it requires to know the position of the last token. If a pad_token_id is defined in the configuration, it finds the last token that is not a padding token in each row. If no pad_token_id is defined, it simply takes the last value in each row of the batch. Since it cannot guess the padding tokens when inputs_embeds are passed instead of input_ids, it does the same (take the last value in each row of the batch). This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.SequenceClassifierOutputWithPast or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1]. What are position IDs? 
head_mask (torch.FloatTensor of shape (num_attention_heads,) or (n_layer, num_attention_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_dim), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). Returns transformers.modeling_outputs.SequenceClassifierOutputWithPast or tuple(torch.FloatTensor) A transformers.modeling_outputs.SequenceClassifierOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPTJConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The GPTJForSequenceClassification forward method, overrides the __call__ special method. 
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. This example uses a random model as the real ones are all very big. To get proper results, you should use EleutherAI/gpt-j-6B instead of ydshieh/tiny-random-gptj-for-sequence-classification. If you get out-of-memory when loading that checkpoint, you can try adding device_map="auto" in the from_pretrained call. Example of single-label classification:

>>> import torch
>>> from transformers import AutoTokenizer, GPTJForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("ydshieh/tiny-random-gptj-for-sequence-classification")
>>> model = GPTJForSequenceClassification.from_pretrained("ydshieh/tiny-random-gptj-for-sequence-classification")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_class_id = logits.argmax().item()

>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = GPTJForSequenceClassification.from_pretrained("ydshieh/tiny-random-gptj-for-sequence-classification", num_labels=num_labels)

>>> labels = torch.tensor([1])
>>> loss = model(**inputs, labels=labels).loss

Example of multi-label classification:

>>> import torch
>>> from transformers import AutoTokenizer, GPTJForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("ydshieh/tiny-random-gptj-for-sequence-classification")
>>> model = GPTJForSequenceClassification.from_pretrained("ydshieh/tiny-random-gptj-for-sequence-classification", problem_type="multi_label_classification")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]

>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = GPTJForSequenceClassification.from_pretrained(
...     "ydshieh/tiny-random-gptj-for-sequence-classification", num_labels=num_labels, problem_type="multi_label_classification"
... )

>>> labels = torch.sum(
...     torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
>>> loss = model(**inputs, labels=labels).loss

GPTJForQuestionAnswering class transformers.GPTJForQuestionAnswering < source > ( config ) Parameters config (GPTJConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The GPT-J Model transformer with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits). This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None start_positions: typing.Optional[torch.LongTensor] = None end_positions: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.n_positions - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_attention_heads,) or (n_layer, num_attention_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_dim), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. start_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. end_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. 
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPTJConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The GPTJForQuestionAnswering forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. This example uses a random model as the real ones are all very big. To get proper results, you should use EleutherAI/gpt-j-6B instead of hf-internal-testing/tiny-random-gptj. If you get out-of-memory when loading that checkpoint, you can try adding device_map="auto" in the from_pretrained call. Example:

>>> from transformers import AutoTokenizer, GPTJForQuestionAnswering
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("hf-internal-testing/tiny-random-gptj")
>>> model = GPTJForQuestionAnswering.from_pretrained("hf-internal-testing/tiny-random-gptj")

>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"

>>> inputs = tokenizer(question, text, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> answer_start_index = outputs.start_logits.argmax()
>>> answer_end_index = outputs.end_logits.argmax()
>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]

>>> # target is "nice puppet"
>>> target_start_index = torch.tensor([14])
>>> target_end_index = torch.tensor([15])

>>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
>>> loss = outputs.loss

TFGPTJModel class transformers.TFGPTJModel < source > ( *args **kwargs ) Parameters config (GPTJConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare GPT-J Model transformer outputting raw hidden-states without any specific head on top. This model inherits from TFPreTrainedModel.
Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None use_cache: Optional[bool] = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFBaseModelOutputWithPast or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past is None else past[0].shape[-2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. If past is used, only input IDs that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? past_key_values (List[tf.Tensor] of length config.n_layers) — Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see past output below). Can be used to speed up sequential decoding. The token ids which have their past given to this model should not be passed as input ids as they have already been computed. attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. 
What are attention masks? token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). use_cache (bool, optional, defaults to True) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past). Set to False during training, True during generation A transformers.modeling_tf_outputs.TFBaseModelOutputWithPast or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPTJConfig) and inputs. last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output. past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)). Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). 
Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFGPTJModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFGPTJModel >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B") >>> model = TFGPTJModel.from_pretrained("EleutherAI/gpt-j-6B") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> outputs = model(inputs) >>> last_hidden_states = outputs.last_hidden_state TFGPTJForCausalLM class transformers.TFGPTJForCausalLM < source > ( *args **kwargs ) Parameters config (GPTJConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The GPT-J Model transformer with a language modeling head on top. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! 
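As an illustrative sketch of these input formats (reusing the EleutherAI/gpt-j-6B checkpoint from the surrounding examples; any GPT-J checkpoint would do), the three calls below pass the same tensors to the model in the three supported ways:
>>> from transformers import AutoTokenizer, TFGPTJForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
>>> model = TFGPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
>>> encoding = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> # 1. keyword arguments, like a PyTorch model
>>> outputs = model(input_ids=encoding["input_ids"], attention_mask=encoding["attention_mask"])
>>> # 2. a list of tensors, in the order given in the docstring
>>> outputs = model([encoding["input_ids"], encoding["attention_mask"]])
>>> # 3. a dictionary mapping input names to tensors, which is what Keras passes internally
>>> outputs = model({"input_ids": encoding["input_ids"], "attention_mask": encoding["attention_mask"]})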
call < source > ( input_ids: TFModelInputType | None = None past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None labels: np.ndarray | tf.Tensor | None = None use_cache: Optional[bool] = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFCausalLMOutputWithPast or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past is None else past[0].shape[-2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. If past is used, only input IDs that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? past_key_values (List[tf.Tensor] of length config.n_layers) — Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see past output below). Can be used to speed up sequential decoding. The token ids which have their past given to this model should not be passed as input ids as they have already been computed. attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. 
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). labels (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set labels = input_ids Indices are selected in [-100, 0, ..., config.vocab_size] All labels set to -100 are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size] A transformers.modeling_tf_outputs.TFCausalLMOutputWithPast or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPTJConfig) and inputs. loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss (for next-token prediction). logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)). Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFGPTJForCausalLM forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFGPTJForCausalLM >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B") >>> model = TFGPTJForCausalLM.from_pretrained("EleutherAI/gpt-j-6B") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> outputs = model(inputs) >>> logits = outputs.logits TFGPTJForSequenceClassification class transformers.TFGPTJForSequenceClassification < source > ( *args **kwargs ) Parameters config (GPTJConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. 
The GPT-J Model transformer with a sequence classification head on top (linear layer). GPTJForSequenceClassification uses the last token in order to do the classification, as other causal models (e.g. GPT, GPT-2, GPT-Neo) do. Since it does classification on the last token, it requires to know the position of the last token. If a pad_token_id is defined in the configuration, it finds the last token that is not a padding token in each row. If no pad_token_id is defined, it simply takes the last value in each row of the batch. Since it cannot guess the padding tokens when inputs_embeds are passed instead of input_ids, it does the same (take the last value in each row of the batch). This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None labels: np.ndarray | tf.Tensor | None = None use_cache: Optional[bool] = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFSequenceClassifierOutputWithPast or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past is None else past[0].shape[-2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. 
If past is used, only input IDs that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? past_key_values (List[tf.Tensor] of length config.n_layers) — Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see past output below). Can be used to speed up sequential decoding. The token ids which have their past given to this model should not be passed as input ids as they have already been computed. attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). labels (np.ndarray or tf.Tensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_tf_outputs.TFSequenceClassifierOutputWithPast or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPTJConfig) and inputs. 
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss.
logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax).
past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding.
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFGPTJForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, TFGPTJForSequenceClassification
>>> import tensorflow as tf
>>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
>>> model = TFGPTJForSequenceClassification.from_pretrained("EleutherAI/gpt-j-6B")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> logits = model(**inputs).logits
>>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0])
>>> # To train a model on `num_labels` classes, pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = TFGPTJForSequenceClassification.from_pretrained("EleutherAI/gpt-j-6B", num_labels=num_labels)
>>> labels = tf.constant(1)
>>> loss = model(**inputs, labels=labels).loss
TFGPTJForQuestionAnswering
class transformers.TFGPTJForQuestionAnswering
< source >
( *args **kwargs )
Parameters
config (GPTJConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
The GPT-J Model transformer with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits).
This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None start_positions: np.ndarray | tf.Tensor | None = None end_positions: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past is None else past[0].shape[-2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. If past is used, only input IDs that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? past_key_values (List[tf.Tensor] of length config.n_layers) — Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see past output below). Can be used to speed up sequential decoding. The token ids which have their past given to this model should not be passed as input ids as they have already been computed. attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. 
Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). start_positions (np.ndarray or tf.Tensor of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. end_positions (np.ndarray or tf.Tensor of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. A transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPTJConfig) and inputs. loss (tf.Tensor of shape (batch_size, ), optional, returned when start_positions and end_positions are provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. start_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). 
Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The TFGPTJForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, TFGPTJForQuestionAnswering
>>> import tensorflow as tf
>>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
>>> model = TFGPTJForQuestionAnswering.from_pretrained("EleutherAI/gpt-j-6B")
>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
>>> inputs = tokenizer(question, text, return_tensors="tf")
>>> outputs = model(**inputs)
>>> answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0])
>>> answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0])
>>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1]
>>> # target span is "nice puppet"
>>> target_start_index = tf.constant([14])
>>> target_end_index = tf.constant([15])
>>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index)
>>> loss = tf.math.reduce_mean(outputs.loss)
FlaxGPTJModel
class transformers.FlaxGPTJModel
< source >
( config: GPTJConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs )
Parameters
config (GPTJConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified, all the computation will be performed with the given dtype. Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters. If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16().
The bare GPTJ Model transformer outputting raw hidden-states without any specific head on top.
This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)
This model is also a Flax Linen flax.nn.Module subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matters related to general usage and behavior.
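As a minimal sketch of the dtype behaviour described above (assuming a GPT-J checkpoint that ships Flax weights, e.g. EleutherAI/gpt-j-6B, used here purely for illustration), the computation dtype is set at load time, while the parameters themselves are cast separately with to_bf16():
>>> import jax.numpy as jnp
>>> from transformers import FlaxGPTJModel
>>> # run the forward pass in bfloat16; the parameters are not affected by `dtype`
>>> model = FlaxGPTJModel.from_pretrained("EleutherAI/gpt-j-6B", dtype=jnp.bfloat16)
>>> # optionally cast the parameters to bfloat16 as well
>>> model.params = model.to_bf16(model.params)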
Finally, this model supports inherent JAX features such as: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization __call__ < source > ( input_ids attention_mask = None position_ids = None params: dict = None past_key_values: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_flax_outputs.FlaxMaskedLMOutput or tuple(torch.FloatTensor) Parameters input_ids (numpy.ndarray of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length. Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) — Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length]. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_flax_outputs.FlaxMaskedLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPTJConfig) and inputs. logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaxGPTJPreTrainedModel forward method, overrides the __call__ special method. 
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FlaxGPTJModel >>> tokenizer = AutoTokenizer.from_pretrained("gptj") >>> model = FlaxGPTJModel.from_pretrained("gptj") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="jax") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state FlaxGPTJForCausalLM class transformers.FlaxGPTJForCausalLM < source > ( config: GPTJConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs ) Parameters config (GPTJConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. dtype (jax.numpy.dtype, optional, defaults to jax.numpy.float32) — The data type of the computation. Can be one of jax.numpy.float32, jax.numpy.float16 (on GPUs) and jax.numpy.bfloat16 (on TPUs). This can be used to enable mixed-precision training or half-precision inference on GPUs or TPUs. If specified all the computation will be performed with the given dtype. Note that this only specifies the dtype of the computation and does not influence the dtype of model parameters. If you wish to change the dtype of the model parameters, see to_fp16() and to_bf16(). The GPTJ Model transformer with a language modeling head on top. This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a Flax Linen flax.nn.Module subclass. Use it as a regular Flax Module and refer to the Flax documentation for all matter related to general usage and behavior. Finally, this model supports inherent JAX features such as: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization __call__ < source > ( input_ids attention_mask = None position_ids = None params: dict = None past_key_values: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_flax_outputs.FlaxMaskedLMOutput or tuple(torch.FloatTensor) Parameters input_ids (numpy.ndarray of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length. Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (numpy.ndarray of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. 
past_key_values (Dict[str, np.ndarray], optional, returned by init_cache or when passing previous past_key_values) — Dictionary of pre-computed hidden-states (key and values in the attention blocks) that can be used for fast auto-regressive decoding. Pre-computed key and value hidden-states are of shape [batch_size, max_length].
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
A transformers.modeling_flax_outputs.FlaxMaskedLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPTJConfig) and inputs.
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The FlaxGPTJPreTrainedModel forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, FlaxGPTJForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("gptj")
>>> model = FlaxGPTJForCausalLM.from_pretrained("gptj")
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
>>> outputs = model(**inputs)
>>> # retrieve logits for the next token
>>> next_token_logits = outputs.logits[:, -1]
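As a small usage sketch continuing the example above, the highest-scoring candidate in next_token_logits can be selected greedily and decoded back to text:
>>> import jax.numpy as jnp
>>> # greedy choice of the single most likely next token for the first (and only) sequence in the batch
>>> next_token_id = int(jnp.argmax(next_token_logits, axis=-1)[0])
>>> next_token = tokenizer.decode([next_token_id])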
Conditional DETR Overview The Conditional DETR model was proposed in Conditional DETR for Fast Training Convergence by Depu Meng, Xiaokang Chen, Zejia Fan, Gang Zeng, Houqiang Li, Yuhui Yuan, Lei Sun, Jingdong Wang. Conditional DETR presents a conditional cross-attention mechanism for fast DETR training. Conditional DETR converges 6.7× to 10× faster than DETR. The abstract from the paper is the following: The recently-developed DETR approach applies the transformer encoder and decoder architecture to object detection and achieves promising performance. In this paper, we handle the critical issue, slow training convergence, and present a conditional cross-attention mechanism for fast DETR training. Our approach is motivated by that the cross-attention in DETR relies highly on the content embeddings for localizing the four extremities and predicting the box, which increases the need for high-quality content embeddings and thus the training difficulty. Our approach, named conditional DETR, learns a conditional spatial query from the decoder embedding for decoder multi-head cross-attention. The benefit is that through the conditional spatial query, each cross-attention head is able to attend to a band containing a distinct region, e.g., one object extremity or a region inside the object box. This narrows down the spatial range for localizing the distinct regions for object classification and box regression, thus relaxing the dependence on the content embeddings and easing the training. Empirical results show that conditional DETR converges 6.7× faster for the backbones R50 and R101 and 10× faster for stronger backbones DC5-R50 and DC5-R101. Code is available at https://github.com/Atten4Vis/ConditionalDETR. Conditional DETR shows much faster convergence compared to the original DETR. Taken from the original paper. This model was contributed by DepuMeng. The original code can be found here. Documentation resources Object detection task guide ConditionalDetrConfig class transformers.ConditionalDetrConfig < source > ( use_timm_backbone = True backbone_config = None num_channels = 3 num_queries = 300 encoder_layers = 6 encoder_ffn_dim = 2048 encoder_attention_heads = 8 decoder_layers = 6 decoder_ffn_dim = 2048 decoder_attention_heads = 8 encoder_layerdrop = 0.0 decoder_layerdrop = 0.0 is_encoder_decoder = True activation_function = 'relu' d_model = 256 dropout = 0.1 attention_dropout = 0.0 activation_dropout = 0.0 init_std = 0.02 init_xavier_std = 1.0 auxiliary_loss = False position_embedding_type = 'sine' backbone = 'resnet50' use_pretrained_backbone = True dilation = False class_cost = 2 bbox_cost = 5 giou_cost = 2 mask_loss_coefficient = 1 dice_loss_coefficient = 1 cls_loss_coefficient = 2 bbox_loss_coefficient = 5 giou_loss_coefficient = 2 focal_alpha = 0.25 **kwargs ) Parameters use_timm_backbone (bool, optional, defaults to True) — Whether or not to use the timm library for the backbone. If set to False, will use the AutoBackbone API. backbone_config (PretrainedConfig or dict, optional) — The configuration of the backbone model. Only used in case use_timm_backbone is set to False in which case it will default to ResNetConfig(). num_channels (int, optional, defaults to 3) — The number of input channels. num_queries (int, optional, defaults to 100) — Number of object queries, i.e. detection slots. This is the maximal number of objects ConditionalDetrModel can detect in a single image. For COCO, we recommend 100 queries. d_model (int, optional, defaults to 256) — Dimension of the layers. 
encoder_layers (int, optional, defaults to 6) — Number of encoder layers. decoder_layers (int, optional, defaults to 6) — Number of decoder layers. encoder_attention_heads (int, optional, defaults to 8) — Number of attention heads for each attention layer in the Transformer encoder. decoder_attention_heads (int, optional, defaults to 8) — Number of attention heads for each attention layer in the Transformer decoder. decoder_ffn_dim (int, optional, defaults to 2048) — Dimension of the “intermediate” (often named feed-forward) layer in decoder. encoder_ffn_dim (int, optional, defaults to 2048) — Dimension of the “intermediate” (often named feed-forward) layer in decoder. activation_function (str or function, optional, defaults to "relu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported. dropout (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities. activation_dropout (float, optional, defaults to 0.0) — The dropout ratio for activations inside the fully connected layer. init_std (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. init_xavier_std (float, optional, defaults to 1) — The scaling factor used for the Xavier initialization gain in the HM Attention map module. encoder_layerdrop (float, optional, defaults to 0.0) — The LayerDrop probability for the encoder. See the [LayerDrop paper](see https://arxiv.org/abs/1909.11556) for more details. decoder_layerdrop (float, optional, defaults to 0.0) — The LayerDrop probability for the decoder. See the [LayerDrop paper](see https://arxiv.org/abs/1909.11556) for more details. auxiliary_loss (bool, optional, defaults to False) — Whether auxiliary decoding losses (loss at each decoder layer) are to be used. position_embedding_type (str, optional, defaults to "sine") — Type of position embeddings to be used on top of the image features. One of "sine" or "learned". backbone (str, optional, defaults to "resnet50") — Name of convolutional backbone to use in case use_timm_backbone = True. Supports any convolutional backbone from the timm package. For a list of all available models, see this page. use_pretrained_backbone (bool, optional, defaults to True) — Whether to use pretrained weights for the backbone. Only supported when use_timm_backbone = True. dilation (bool, optional, defaults to False) — Whether to replace stride with dilation in the last convolutional block (DC5). Only supported when use_timm_backbone = True. class_cost (float, optional, defaults to 1) — Relative weight of the classification error in the Hungarian matching cost. bbox_cost (float, optional, defaults to 5) — Relative weight of the L1 error of the bounding box coordinates in the Hungarian matching cost. giou_cost (float, optional, defaults to 2) — Relative weight of the generalized IoU loss of the bounding box in the Hungarian matching cost. mask_loss_coefficient (float, optional, defaults to 1) — Relative weight of the Focal loss in the panoptic segmentation loss. dice_loss_coefficient (float, optional, defaults to 1) — Relative weight of the DICE/F-1 loss in the panoptic segmentation loss. 
bbox_loss_coefficient (float, optional, defaults to 5) — Relative weight of the L1 bounding box loss in the object detection loss.
giou_loss_coefficient (float, optional, defaults to 2) — Relative weight of the generalized IoU loss in the object detection loss.
eos_coefficient (float, optional, defaults to 0.1) — Relative classification weight of the ‘no-object’ class in the object detection loss.
focal_alpha (float, optional, defaults to 0.25) — Alpha parameter in the focal loss.
This is the configuration class to store the configuration of a ConditionalDetrModel. It is used to instantiate a Conditional DETR model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Conditional DETR microsoft/conditional-detr-resnet-50 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Examples:
>>> from transformers import ConditionalDetrConfig, ConditionalDetrModel
>>> # Initializing a Conditional DETR microsoft/conditional-detr-resnet-50 style configuration
>>> configuration = ConditionalDetrConfig()
>>> # Initializing a model (with random weights) from that configuration
>>> model = ConditionalDetrModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
ConditionalDetrImageProcessor
class transformers.ConditionalDetrImageProcessor
< source >
( format: typing.Union[str, transformers.models.conditional_detr.image_processing_conditional_detr.AnnotionFormat] = <AnnotionFormat.COCO_DETECTION: 'coco_detection'> do_resize: bool = True size: typing.Dict[str, int] = None resample: Resampling = <Resampling.BILINEAR: 2> do_rescale: bool = True rescale_factor: typing.Union[int, float] = 0.00392156862745098 do_normalize: bool = True image_mean: typing.Union[float, typing.List[float]] = None image_std: typing.Union[float, typing.List[float]] = None do_pad: bool = True **kwargs )
Parameters
format (str, optional, defaults to "coco_detection") — Data format of the annotations. One of “coco_detection” or “coco_panoptic”.
do_resize (bool, optional, defaults to True) — Controls whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by the do_resize parameter in the preprocess method.
size (Dict[str, int], optional, defaults to {"shortest_edge": 800, "longest_edge": 1333}) — Size of the image’s (height, width) dimensions after resizing. Can be overridden by the size parameter in the preprocess method.
resample (PILImageResampling, optional, defaults to PILImageResampling.BILINEAR) — Resampling filter to use if resizing the image.
do_rescale (bool, optional, defaults to True) — Controls whether to rescale the image by the specified scale rescale_factor. Can be overridden by the do_rescale parameter in the preprocess method.
rescale_factor (int or float, optional, defaults to 1/255) — Scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter in the preprocess method.
do_normalize (bool, optional, defaults to True) — Controls whether to normalize the image. Can be overridden by the do_normalize parameter in the preprocess method.
image_mean (float or List[float], optional, defaults to IMAGENET_DEFAULT_MEAN) — Mean values to use when normalizing the image. Can be a single value or a list of values, one for each channel. Can be overridden by the image_mean parameter in the preprocess method.
image_std (float or List[float], optional, defaults to IMAGENET_DEFAULT_STD) — Standard deviation values to use when normalizing the image. Can be a single value or a list of values, one for each channel.
Can be overridden by the image_std parameter in the preprocess method. do_pad (bool, optional, defaults to True) — Controls whether to pad the image to the largest image in a batch and create a pixel mask. Can be overridden by the do_pad parameter in the preprocess method. Constructs a Conditional Detr image processor. preprocess < source > ( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]] annotations: typing.Union[typing.Dict[str, typing.Union[int, str, typing.List[typing.Dict]]], typing.List[typing.Dict[str, typing.Union[int, str, typing.List[typing.Dict]]]], NoneType] = None return_segmentation_masks: bool = None masks_path: typing.Union[str, pathlib.Path, NoneType] = None do_resize: typing.Optional[bool] = None size: typing.Union[typing.Dict[str, int], NoneType] = None resample = None do_rescale: typing.Optional[bool] = None rescale_factor: typing.Union[int, float, NoneType] = None do_normalize: typing.Optional[bool] = None image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None do_pad: typing.Optional[bool] = None format: typing.Union[str, transformers.models.conditional_detr.image_processing_conditional_detr.AnnotionFormat, NoneType] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None data_format: typing.Union[str, transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'> input_data_format: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None **kwargs ) Parameters images (ImageInput) — Image or batch of images to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set do_rescale=False. annotations (AnnotationType or List[AnnotationType], optional) — List of annotations associated with the image or batch of images. If annotation is for object detection, the annotations should be a dictionary with the following keys: “image_id” (int): The image id. “annotations” (List[Dict]): List of annotations for an image. Each annotation should be a dictionary. An image can have no annotations, in which case the list should be empty. If annotation is for segmentation, the annotations should be a dictionary with the following keys: “image_id” (int): The image id. “segments_info” (List[Dict]): List of segments for an image. Each segment should be a dictionary. An image can have no segments, in which case the list should be empty. “file_name” (str): The file name of the image. return_segmentation_masks (bool, optional, defaults to self.return_segmentation_masks) — Whether to return segmentation masks. masks_path (str or pathlib.Path, optional) — Path to the directory containing the segmentation masks. do_resize (bool, optional, defaults to self.do_resize) — Whether to resize the image. size (Dict[str, int], optional, defaults to self.size) — Size of the image after resizing. resample (PILImageResampling, optional, defaults to self.resample) — Resampling filter to use when resizing the image. do_rescale (bool, optional, defaults to self.do_rescale) — Whether to rescale the image. rescale_factor (float, optional, defaults to self.rescale_factor) — Rescale factor to use when rescaling the image. 
do_normalize (bool, optional, defaults to self.do_normalize) — Whether to normalize the image. image_mean (float or List[float], optional, defaults to self.image_mean) — Mean to use when normalizing the image. image_std (float or List[float], optional, defaults to self.image_std) — Standard deviation to use when normalizing the image. do_pad (bool, optional, defaults to self.do_pad) — Whether to pad the image. format (str or AnnotionFormat, optional, defaults to self.format) — Format of the annotations. return_tensors (str or TensorType, optional, defaults to self.return_tensors) — Type of tensors to return. If None, will return the list of images. data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) — The channel dimension format for the output image. Can be one of: "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format. "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format. Unset: Use the channel dimension format of the input image. input_data_format (ChannelDimension or str, optional) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of: "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format. "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format. "none" or ChannelDimension.NONE: image in (height, width) format. Preprocess an image or a batch of images so that it can be used by the model. post_process_object_detection < source > ( outputs threshold: float = 0.5 target_sizes: typing.Union[transformers.utils.generic.TensorType, typing.List[typing.Tuple]] = None top_k: int = 100 ) → List[Dict] Parameters outputs (DetrObjectDetectionOutput) — Raw outputs of the model. threshold (float, optional) — Score threshold to keep object detection predictions. target_sizes (torch.Tensor or List[Tuple[int, int]], optional) — Tensor of shape (batch_size, 2) or list of tuples (Tuple[int, int]) containing the target size (height, width) of each image in the batch. If left to None, predictions will not be resized. top_k (int, optional, defaults to 100) — Keep only top k bounding boxes before filtering by thresholding. A list of dictionaries, each dictionary containing the scores, labels and boxes for an image in the batch as predicted by the model. Converts the raw output of ConditionalDetrForObjectDetection into final bounding boxes in (top_left_x, top_left_y, bottom_right_x, bottom_right_y) format. Only supports PyTorch. post_process_instance_segmentation < source > ( outputs threshold: float = 0.5 mask_threshold: float = 0.5 overlap_mask_area_threshold: float = 0.8 target_sizes: typing.Union[typing.List[typing.Tuple[int, int]], NoneType] = None return_coco_annotation: typing.Optional[bool] = False ) → List[Dict] Parameters outputs (ConditionalDetrForSegmentation) — Raw outputs of the model. threshold (float, optional, defaults to 0.5) — The probability score threshold to keep predicted instance masks. mask_threshold (float, optional, defaults to 0.5) — Threshold to use when turning the predicted masks into binary values. overlap_mask_area_threshold (float, optional, defaults to 0.8) — The overlap mask area threshold to merge or discard small disconnected parts within each binary instance mask. 
target_sizes (List[Tuple], optional) — List of length (batch_size), where each list item (Tuple[int, int]]) corresponds to the requested final size (height, width) of each prediction. If unset, predictions will not be resized. return_coco_annotation (bool, optional) — Defaults to False. If set to True, segmentation maps are returned in COCO run-length encoding (RLE) format. A list of dictionaries, one per image, each dictionary containing two keys: segmentation — A tensor of shape (height, width) where each pixel represents a segment_id or List[List] run-length encoding (RLE) of the segmentation map if return_coco_annotation is set to True. Set to None if no mask if found above threshold. segments_info — A dictionary that contains additional information on each segment. id — An integer representing the segment_id. label_id — An integer representing the label / semantic class id corresponding to segment_id. score — Prediction score of segment with segment_id. Converts the output of ConditionalDetrForSegmentation into instance segmentation predictions. Only supports PyTorch. post_process_semantic_segmentation < source > ( outputs target_sizes: typing.List[typing.Tuple[int, int]] = None ) → List[torch.Tensor] Parameters outputs (ConditionalDetrForSegmentation) — Raw outputs of the model. target_sizes (List[Tuple[int, int]], optional) — A list of tuples (Tuple[int, int]) containing the target size (height, width) of each image in the batch. If unset, predictions will not be resized. Returns List[torch.Tensor] A list of length batch_size, where each item is a semantic segmentation map of shape (height, width) corresponding to the target_sizes entry (if target_sizes is specified). Each entry of each torch.Tensor correspond to a semantic class id. Converts the output of ConditionalDetrForSegmentation into semantic segmentation maps. Only supports PyTorch. post_process_panoptic_segmentation < source > ( outputs threshold: float = 0.5 mask_threshold: float = 0.5 overlap_mask_area_threshold: float = 0.8 label_ids_to_fuse: typing.Optional[typing.Set[int]] = None target_sizes: typing.Union[typing.List[typing.Tuple[int, int]], NoneType] = None ) → List[Dict] Parameters outputs (ConditionalDetrForSegmentation) — The outputs from ConditionalDetrForSegmentation. threshold (float, optional, defaults to 0.5) — The probability score threshold to keep predicted instance masks. mask_threshold (float, optional, defaults to 0.5) — Threshold to use when turning the predicted masks into binary values. overlap_mask_area_threshold (float, optional, defaults to 0.8) — The overlap mask area threshold to merge or discard small disconnected parts within each binary instance mask. label_ids_to_fuse (Set[int], optional) — The labels in this state will have all their instances be fused together. For instance we could say there can only be one sky in an image, but several persons, so the label ID for sky would be in that set, but not the one for person. target_sizes (List[Tuple], optional) — List of length (batch_size), where each list item (Tuple[int, int]]) corresponds to the requested final size (height, width) of each prediction in batch. If unset, predictions will not be resized. A list of dictionaries, one per image, each dictionary containing two keys: segmentation — a tensor of shape (height, width) where each pixel represents a segment_id or None if no mask if found above threshold. If target_sizes is specified, segmentation is resized to the corresponding target_sizes entry. 
segments_info — A dictionary that contains additional information on each segment. id — an integer representing the segment_id. label_id — An integer representing the label / semantic class id corresponding to segment_id. was_fused — a boolean, True if label_id was in label_ids_to_fuse, False otherwise. Multiple instances of the same class / label were fused and assigned a single segment_id. score — Prediction score of segment with segment_id. Converts the output of ConditionalDetrForSegmentation into image panoptic segmentation predictions. Only supports PyTorch. ConditionalDetrFeatureExtractor Preprocess an image or a batch of images. ( outputs threshold: float = 0.5 target_sizes: typing.Union[transformers.utils.generic.TensorType, typing.List[typing.Tuple]] = None top_k: int = 100 ) → List[Dict] Parameters outputs (DetrObjectDetectionOutput) — Raw outputs of the model. threshold (float, optional) — Score threshold to keep object detection predictions. target_sizes (torch.Tensor or List[Tuple[int, int]], optional) — Tensor of shape (batch_size, 2) or list of tuples (Tuple[int, int]) containing the target size (height, width) of each image in the batch. If left to None, predictions will not be resized. top_k (int, optional, defaults to 100) — Keep only top k bounding boxes before filtering by thresholding. A list of dictionaries, each dictionary containing the scores, labels and boxes for an image in the batch as predicted by the model. Converts the raw output of ConditionalDetrForObjectDetection into final bounding boxes in (top_left_x, top_left_y, bottom_right_x, bottom_right_y) format. Only supports PyTorch. ( outputs threshold: float = 0.5 mask_threshold: float = 0.5 overlap_mask_area_threshold: float = 0.8 target_sizes: typing.Union[typing.List[typing.Tuple[int, int]], NoneType] = None return_coco_annotation: typing.Optional[bool] = False ) → List[Dict] Parameters outputs (ConditionalDetrForSegmentation) — Raw outputs of the model. threshold (float, optional, defaults to 0.5) — The probability score threshold to keep predicted instance masks. mask_threshold (float, optional, defaults to 0.5) — Threshold to use when turning the predicted masks into binary values. overlap_mask_area_threshold (float, optional, defaults to 0.8) — The overlap mask area threshold to merge or discard small disconnected parts within each binary instance mask. target_sizes (List[Tuple], optional) — List of length (batch_size), where each list item (Tuple[int, int]]) corresponds to the requested final size (height, width) of each prediction. If unset, predictions will not be resized. return_coco_annotation (bool, optional) — Defaults to False. If set to True, segmentation maps are returned in COCO run-length encoding (RLE) format. A list of dictionaries, one per image, each dictionary containing two keys: segmentation — A tensor of shape (height, width) where each pixel represents a segment_id or List[List] run-length encoding (RLE) of the segmentation map if return_coco_annotation is set to True. Set to None if no mask if found above threshold. segments_info — A dictionary that contains additional information on each segment. id — An integer representing the segment_id. label_id — An integer representing the label / semantic class id corresponding to segment_id. score — Prediction score of segment with segment_id. Converts the output of ConditionalDetrForSegmentation into instance segmentation predictions. Only supports PyTorch. 
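For illustration, here is a minimal, editor-added sketch of how post_process_instance_segmentation is typically called. It mirrors the randomly initialized ConditionalDetrForSegmentation used in the example further down this page, so the predicted segments are not meaningful; in practice you would load a fine-tuned segmentation checkpoint instead.

>>> # Editor-added sketch: instance segmentation post-processing with a randomly initialized model
>>> import requests
>>> import torch
>>> from PIL import Image
>>> from transformers import AutoImageProcessor, ConditionalDetrConfig, ConditionalDetrForSegmentation

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = AutoImageProcessor.from_pretrained("microsoft/conditional-detr-resnet-50")
>>> model = ConditionalDetrForSegmentation(ConditionalDetrConfig())  # random weights, illustrative only

>>> inputs = image_processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # resize predictions back to the original (height, width) of the image
>>> results = image_processor.post_process_instance_segmentation(
...     outputs, threshold=0.5, target_sizes=[image.size[::-1]]
... )[0]
>>> segmentation_map = results["segmentation"]  # (height, width) tensor of segment ids, or None if nothing passes the threshold
>>> for segment in results["segments_info"]:
...     print(segment["id"], segment["label_id"], round(segment["score"], 3))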
( outputs target_sizes: typing.List[typing.Tuple[int, int]] = None ) → List[torch.Tensor] Parameters outputs (ConditionalDetrForSegmentation) — Raw outputs of the model. target_sizes (List[Tuple[int, int]], optional) — A list of tuples (Tuple[int, int]) containing the target size (height, width) of each image in the batch. If unset, predictions will not be resized. A list of length batch_size, where each item is a semantic segmentation map of shape (height, width) corresponding to the target_sizes entry (if target_sizes is specified). Each entry of each torch.Tensor correspond to a semantic class id. Converts the output of ConditionalDetrForSegmentation into semantic segmentation maps. Only supports PyTorch. ( outputs threshold: float = 0.5 mask_threshold: float = 0.5 overlap_mask_area_threshold: float = 0.8 label_ids_to_fuse: typing.Optional[typing.Set[int]] = None target_sizes: typing.Union[typing.List[typing.Tuple[int, int]], NoneType] = None ) → List[Dict] Parameters outputs (ConditionalDetrForSegmentation) — The outputs from ConditionalDetrForSegmentation. threshold (float, optional, defaults to 0.5) — The probability score threshold to keep predicted instance masks. mask_threshold (float, optional, defaults to 0.5) — Threshold to use when turning the predicted masks into binary values. overlap_mask_area_threshold (float, optional, defaults to 0.8) — The overlap mask area threshold to merge or discard small disconnected parts within each binary instance mask. label_ids_to_fuse (Set[int], optional) — The labels in this state will have all their instances be fused together. For instance we could say there can only be one sky in an image, but several persons, so the label ID for sky would be in that set, but not the one for person. target_sizes (List[Tuple], optional) — List of length (batch_size), where each list item (Tuple[int, int]]) corresponds to the requested final size (height, width) of each prediction in batch. If unset, predictions will not be resized. A list of dictionaries, one per image, each dictionary containing two keys: segmentation — a tensor of shape (height, width) where each pixel represents a segment_id or None if no mask if found above threshold. If target_sizes is specified, segmentation is resized to the corresponding target_sizes entry. segments_info — A dictionary that contains additional information on each segment. id — an integer representing the segment_id. label_id — An integer representing the label / semantic class id corresponding to segment_id. was_fused — a boolean, True if label_id was in label_ids_to_fuse, False otherwise. Multiple instances of the same class / label were fused and assigned a single segment_id. score — Prediction score of segment with segment_id. Converts the output of ConditionalDetrForSegmentation into image panoptic segmentation predictions. Only supports PyTorch. ConditionalDetrModel class transformers.ConditionalDetrModel < source > ( config: ConditionalDetrConfig ) Parameters config (ConditionalDetrConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare Conditional DETR Model (consisting of a backbone and encoder-decoder Transformer) outputting raw hidden-states without any specific head on top. This model inherits from PreTrainedModel. 
Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: FloatTensor pixel_mask: typing.Optional[torch.LongTensor] = None decoder_attention_mask: typing.Optional[torch.LongTensor] = None encoder_outputs: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.conditional_detr.modeling_conditional_detr.ConditionalDetrModelOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See ConditionalDetrImageProcessor.call() for details. pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) — Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]: 1 for pixels that are real (i.e. not masked), 0 for pixels that are padding (i.e. masked). What are attention masks? decoder_attention_mask (torch.FloatTensor of shape (batch_size, num_queries), optional) — Not used by default. Can be used to mask object queries. encoder_outputs (tuple(tuple(torch.FloatTensor), optional) — Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions) last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing the flattened feature map (output of the backbone + projection layer), you can choose to directly pass a flattened representation of an image. decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, num_queries, hidden_size), optional) — Optionally, instead of initializing the queries with a tensor of zeros, you can choose to directly pass an embedded representation. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.models.conditional_detr.modeling_conditional_detr.ConditionalDetrModelOutput or tuple(torch.FloatTensor) A transformers.models.conditional_detr.modeling_conditional_detr.ConditionalDetrModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ConditionalDetrConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model. 
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. intermediate_hidden_states (torch.FloatTensor of shape (config.decoder_layers, batch_size, sequence_length, hidden_size), optional, returned when config.auxiliary_loss=True) — Intermediate decoder activations, i.e. the output of each decoder layer, each of them gone through a layernorm. The ConditionalDetrModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Examples: >>> from transformers import AutoImageProcessor, AutoModel >>> from PIL import Image >>> import requests >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image_processor = AutoImageProcessor.from_pretrained("microsoft/conditional-detr-resnet-50") >>> model = AutoModel.from_pretrained("microsoft/conditional-detr-resnet-50") >>> >>> inputs = image_processor(images=image, return_tensors="pt") >>> >>> outputs = model(**inputs) >>> >>> >>> last_hidden_states = outputs.last_hidden_state >>> list(last_hidden_states.shape) [1, 300, 256] ConditionalDetrForObjectDetection class transformers.ConditionalDetrForObjectDetection < source > ( config: ConditionalDetrConfig ) Parameters config (ConditionalDetrConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. CONDITIONAL_DETR Model (consisting of a backbone and encoder-decoder Transformer) with object detection heads on top, for tasks such as COCO detection. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: FloatTensor pixel_mask: typing.Optional[torch.LongTensor] = None decoder_attention_mask: typing.Optional[torch.LongTensor] = None encoder_outputs: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[typing.List[dict]] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.conditional_detr.modeling_conditional_detr.ConditionalDetrObjectDetectionOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See ConditionalDetrImageProcessor.call() for details. pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) — Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]: 1 for pixels that are real (i.e. not masked), 0 for pixels that are padding (i.e. masked). What are attention masks? decoder_attention_mask (torch.FloatTensor of shape (batch_size, num_queries), optional) — Not used by default. Can be used to mask object queries. encoder_outputs (tuple(tuple(torch.FloatTensor), optional) — Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions) last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. 
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing the flattened feature map (output of the backbone + projection layer), you can choose to directly pass a flattened representation of an image. decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, num_queries, hidden_size), optional) — Optionally, instead of initializing the queries with a tensor of zeros, you can choose to directly pass an embedded representation. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (List[Dict] of len (batch_size,), optional) — Labels for computing the bipartite matching loss. List of dicts, each dictionary containing at least the following 2 keys: ‘class_labels’ and ‘boxes’ (the class labels and bounding boxes of an image in the batch respectively). The class labels themselves should be a torch.LongTensor of len (number of bounding boxes in the image,) and the boxes a torch.FloatTensor of shape (number of bounding boxes in the image, 4). Returns transformers.models.conditional_detr.modeling_conditional_detr.ConditionalDetrObjectDetectionOutput or tuple(torch.FloatTensor) A transformers.models.conditional_detr.modeling_conditional_detr.ConditionalDetrObjectDetectionOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ConditionalDetrConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels are provided)) — Total loss as a linear combination of a negative log-likehood (cross-entropy) for class prediction and a bounding box loss. The latter is defined as a linear combination of the L1 loss and the generalized scale-invariant IoU loss. loss_dict (Dict, optional) — A dictionary containing the individual losses. Useful for logging. logits (torch.FloatTensor of shape (batch_size, num_queries, num_classes + 1)) — Classification logits (including no-object) for all queries. pred_boxes (torch.FloatTensor of shape (batch_size, num_queries, 4)) — Normalized boxes coordinates for all queries, represented as (center_x, center_y, width, height). These values are normalized in [0, 1], relative to the size of each individual image in the batch (disregarding possible padding). You can use post_process_object_detection() to retrieve the unnormalized bounding boxes. auxiliary_outputs (list[Dict], optional) — Optional, only returned when auxilary losses are activated (i.e. config.auxiliary_loss is set to True) and labels are provided. It is a list of dictionaries containing the two above keys (logits and pred_boxes) for each decoder layer. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the decoder of the model. decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). 
Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. The ConditionalDetrForObjectDetection forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import AutoImageProcessor, AutoModelForObjectDetection >>> from PIL import Image >>> import requests >>> import torch >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image_processor = AutoImageProcessor.from_pretrained("microsoft/conditional-detr-resnet-50") >>> model = AutoModelForObjectDetection.from_pretrained("microsoft/conditional-detr-resnet-50") >>> inputs = image_processor(images=image, return_tensors="pt") >>> outputs = model(**inputs) >>> target_sizes = torch.tensor([image.size[::-1]]) >>> results = image_processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[ ... 0 ... ] >>> for score, label, box in zip(results["scores"], results["labels"], results["boxes"]): ... box = [round(i, 2) for i in box.tolist()] ... print( ... f"Detected {model.config.id2label[label.item()]} with confidence " ... f"{round(score.item(), 3)} at location {box}" ... 
) Detected remote with confidence 0.833 at location [38.31, 72.1, 177.63, 118.45] Detected cat with confidence 0.831 at location [9.2, 51.38, 321.13, 469.0] Detected cat with confidence 0.804 at location [340.3, 16.85, 642.93, 370.95] Detected remote with confidence 0.683 at location [334.48, 73.49, 366.37, 190.01] Detected couch with confidence 0.535 at location [0.52, 1.19, 640.35, 475.1] ConditionalDetrForSegmentation class transformers.ConditionalDetrForSegmentation < source > ( config: ConditionalDetrConfig ) Parameters config (ConditionalDetrConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. CONDITIONAL_DETR Model (consisting of a backbone and encoder-decoder Transformer) with a segmentation head on top, for tasks such as COCO panoptic. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: FloatTensor pixel_mask: typing.Optional[torch.LongTensor] = None decoder_attention_mask: typing.Optional[torch.FloatTensor] = None encoder_outputs: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[typing.List[dict]] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.conditional_detr.modeling_conditional_detr.ConditionalDetrSegmentationOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See ConditionalDetrImageProcessor.call() for details. pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) — Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]: 1 for pixels that are real (i.e. not masked), 0 for pixels that are padding (i.e. masked). What are attention masks? decoder_attention_mask (torch.FloatTensor of shape (batch_size, num_queries), optional) — Not used by default. Can be used to mask object queries. encoder_outputs (tuple(tuple(torch.FloatTensor), optional) — Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions) last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing the flattened feature map (output of the backbone + projection layer), you can choose to directly pass a flattened representation of an image. 
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, num_queries, hidden_size), optional) — Optionally, instead of initializing the queries with a tensor of zeros, you can choose to directly pass an embedded representation. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (List[Dict] of len (batch_size,), optional) — Labels for computing the bipartite matching loss, DICE/F-1 loss and Focal loss. List of dicts, each dictionary containing at least the following 3 keys: ‘class_labels’, ‘boxes’ and ‘masks’ (the class labels, bounding boxes and segmentation masks of an image in the batch respectively). The class labels themselves should be a torch.LongTensor of len (number of bounding boxes in the image,), the boxes a torch.FloatTensor of shape (number of bounding boxes in the image, 4) and the masks a torch.FloatTensor of shape (number of bounding boxes in the image, height, width). Returns transformers.models.conditional_detr.modeling_conditional_detr.ConditionalDetrSegmentationOutput or tuple(torch.FloatTensor) A transformers.models.conditional_detr.modeling_conditional_detr.ConditionalDetrSegmentationOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ConditionalDetrConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels are provided)) — Total loss as a linear combination of a negative log-likehood (cross-entropy) for class prediction and a bounding box loss. The latter is defined as a linear combination of the L1 loss and the generalized scale-invariant IoU loss. loss_dict (Dict, optional) — A dictionary containing the individual losses. Useful for logging. logits (torch.FloatTensor of shape (batch_size, num_queries, num_classes + 1)) — Classification logits (including no-object) for all queries. pred_boxes (torch.FloatTensor of shape (batch_size, num_queries, 4)) — Normalized boxes coordinates for all queries, represented as (center_x, center_y, width, height). These values are normalized in [0, 1], relative to the size of each individual image in the batch (disregarding possible padding). You can use post_process_object_detection() to retrieve the unnormalized bounding boxes. pred_masks (torch.FloatTensor of shape (batch_size, num_queries, height/4, width/4)) — Segmentation masks logits for all queries. See also post_process_semantic_segmentation() or post_process_instance_segmentation() post_process_panoptic_segmentation() to evaluate semantic, instance and panoptic segmentation masks respectively. auxiliary_outputs (list[Dict], optional) — Optional, only returned when auxiliary losses are activated (i.e. config.auxiliary_loss is set to True) and labels are provided. It is a list of dictionaries containing the two above keys (logits and pred_boxes) for each decoder layer. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the decoder of the model. 
decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. The ConditionalDetrForSegmentation forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> import io >>> import requests >>> from PIL import Image >>> import torch >>> import numpy >>> from transformers import ( ... AutoImageProcessor, ... ConditionalDetrConfig, ... ConditionalDetrForSegmentation, ... ) >>> from transformers.image_transforms import rgb_to_id >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image_processor = AutoImageProcessor.from_pretrained("microsoft/conditional-detr-resnet-50") >>> >>> config = ConditionalDetrConfig() >>> model = ConditionalDetrForSegmentation(config) >>> >>> inputs = image_processor(images=image, return_tensors="pt") >>> >>> outputs = model(**inputs) >>> >>> >>> result = image_processor.post_process_panoptic_segmentation(outputs, target_sizes=[(300, 500)]) >>> >>> panoptic_seg = result[0]["segmentation"] >>> >>> panoptic_segments_info = result[0]["segments_info"]
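As an editor-added follow-up to the example above (whose model is randomly initialized, so the segments carry no meaning), the returned panoptic result can be inspected directly; the keys used below are the ones documented for post_process_panoptic_segmentation earlier on this page.

>>> # continues from panoptic_seg and panoptic_segments_info defined above
>>> if panoptic_seg is not None:
...     print("segmentation map shape:", tuple(panoptic_seg.shape))  # (300, 500), from target_sizes
...     print("number of segments:", len(panoptic_segments_info))
>>> for segment in panoptic_segments_info:
...     # each segment dict holds "id", "label_id", "was_fused" and "score"
...     print(f"segment {segment['id']}: label_id={segment['label_id']}, score={segment['score']:.3f}")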
ConvNeXt V2 Overview The ConvNeXt V2 model was proposed in ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders by Sanghyun Woo, Shoubhik Debnath, Ronghang Hu, Xinlei Chen, Zhuang Liu, In So Kweon, Saining Xie. ConvNeXt V2 is a pure convolutional model (ConvNet), inspired by the design of Vision Transformers, and a successor of ConvNeXT. The abstract from the paper is the following: Driven by improved architectures and better representation learning frameworks, the field of visual recognition has enjoyed rapid modernization and performance boost in the early 2020s. For example, modern ConvNets, represented by ConvNeXt, have demonstrated strong performance in various scenarios. While these models were originally designed for supervised learning with ImageNet labels, they can also potentially benefit from self-supervised learning techniques such as masked autoencoders (MAE). However, we found that simply combining these two approaches leads to subpar performance. In this paper, we propose a fully convolutional masked autoencoder framework and a new Global Response Normalization (GRN) layer that can be added to the ConvNeXt architecture to enhance inter-channel feature competition. This co-design of self-supervised learning techniques and architectural improvement results in a new model family called ConvNeXt V2, which significantly improves the performance of pure ConvNets on various recognition benchmarks, including ImageNet classification, COCO detection, and ADE20K segmentation. We also provide pre-trained ConvNeXt V2 models of various sizes, ranging from an efficient 3.7M-parameter Atto model with 76.7% top-1 accuracy on ImageNet, to a 650M Huge model that achieves a state-of-the-art 88.9% accuracy using only public training data. Tips: See the code examples below each model regarding usage. ConvNeXt V2 architecture. Taken from the original paper. This model was contributed by adirik. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with ConvNeXt V2. Image Classification ConvNextV2ForImageClassification is supported by this example script and notebook. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. ConvNextV2Config class transformers.ConvNextV2Config < source > ( num_channels = 3 patch_size = 4 num_stages = 4 hidden_sizes = None depths = None hidden_act = 'gelu' initializer_range = 0.02 layer_norm_eps = 1e-12 drop_path_rate = 0.0 image_size = 224 out_features = None out_indices = None **kwargs ) Parameters num_channels (int, optional, defaults to 3) — The number of input channels. patch_size (int, optional, defaults to 4) — Patch size to use in the patch embedding layer. num_stages (int, optional, defaults to 4) — The number of stages in the model. hidden_sizes (List[int], optional, defaults to [96, 192, 384, 768]) — Dimensionality (hidden size) at each stage. depths (List[int], optional, defaults to [3, 3, 9, 3]) — Depth (number of blocks) for each stage. hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in each block. If string, "gelu", "relu", "selu" and "gelu_new" are supported. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. 
layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers. drop_path_rate (float, optional, defaults to 0.0) — The drop rate for stochastic depth. out_features (List[str], optional) — If used as backbone, list of features to output. Can be any of "stem", "stage1", "stage2", etc. (depending on how many stages the model has). If unset and out_indices is set, will default to the corresponding stages. If unset and out_indices is unset, will default to the last stage. out_indices (List[int], optional) — If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how many stages the model has). If unset and out_features is set, will default to the corresponding stages. If unset and out_features is unset, will default to the last stage. This is the configuration class to store the configuration of a ConvNextV2Model. It is used to instantiate a ConvNeXt V2 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the ConvNeXt V2 facebook/convnextv2-tiny-1k-224 architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import ConvNextV2Config, ConvNextV2Model >>> configuration = ConvNextV2Config() >>> model = ConvNextV2Model(configuration) >>> configuration = model.config ConvNextV2Model class transformers.ConvNextV2Model < source > ( config ) Parameters config (ConvNextV2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare ConvNextV2 model outputting raw features without any specific head on top. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: FloatTensor = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using ConvNextImageProcessor. See ConvNextImageProcessor.call() for details. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or tuple(torch.FloatTensor) A transformers.modeling_outputs.BaseModelOutputWithPoolingAndNoAttention or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ConvNextV2Config) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Sequence of hidden-states at the output of the last layer of the model.
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state after a pooling operation on the spatial dimensions. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, num_channels, height, width). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. The ConvNextV2Model forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoImageProcessor, ConvNextV2Model >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> image_processor = AutoImageProcessor.from_pretrained("facebook/convnextv2-tiny-1k-224") >>> model = ConvNextV2Model.from_pretrained("facebook/convnextv2-tiny-1k-224") >>> inputs = image_processor(image, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state >>> list(last_hidden_states.shape) [1, 768, 7, 7] ConvNextV2ForImageClassification class transformers.ConvNextV2ForImageClassification < source > ( config ) Parameters config (ConvNextV2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. ConvNextV2 Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for ImageNet. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: FloatTensor = None labels: typing.Optional[torch.LongTensor] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using ConvNextImageProcessor. See ConvNextImageProcessor.call() for details. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). 
A transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (ConvNextV2Config) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the model at the output of each stage. The ConvNextV2ForImageClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoImageProcessor, ConvNextV2ForImageClassification >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> image_processor = AutoImageProcessor.from_pretrained("facebook/convnextv2-tiny-1k-224") >>> model = ConvNextV2ForImageClassification.from_pretrained("facebook/convnextv2-tiny-1k-224") >>> inputs = image_processor(image, return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> >>> predicted_label = logits.argmax(-1).item() >>> print(model.config.id2label[predicted_label]) tabby, tabby cat
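As an editor-added alternative sketch, the same facebook/convnextv2-tiny-1k-224 checkpoint used above can be run through the image-classification pipeline, which wraps the preprocessing and post-processing shown in the previous example:

>>> # Editor-added sketch: the same checkpoint via the pipeline API
>>> from datasets import load_dataset
>>> from transformers import pipeline

>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]

>>> classifier = pipeline("image-classification", model="facebook/convnextv2-tiny-1k-224")
>>> for prediction in classifier(image):
...     print(f"{prediction['label']}: {prediction['score']:.3f}")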
CPMAnt Overview CPM-Ant is an open-source Chinese pre-trained language model (PLM) with 10B parameters. It is also the first milestone of the live training process of CPM-Live. The training process is cost-effective and environment-friendly. CPM-Ant also achieves promising results with delta tuning on the CUGE benchmark. Besides the full model, we also provide various compressed versions to meet the requirements of different hardware configurations. Tips: This model was contributed by OpenBMB. The original code can be found here. ⚙️ Training & Inference: A tutorial on CPM-Live. CpmAntConfig class transformers.CpmAntConfig < source > ( vocab_size: int = 30720 hidden_size: int = 4096 num_attention_heads: int = 32 dim_head: int = 128 dim_ff: int = 10240 num_hidden_layers: int = 48 dropout_p: int = 0.0 position_bias_num_buckets: int = 512 position_bias_max_distance: int = 2048 eps: int = 1e-06 init_std: float = 1.0 prompt_types: int = 32 prompt_length: int = 32 segment_types: int = 32 use_cache: bool = True **kwargs ) Parameters vocab_size (int, optional, defaults to 30720) — Vocabulary size of the CPMAnt model. Defines the number of different tokens that can be represented by the input passed when calling CpmAntModel. hidden_size (int, optional, defaults to 4096) — Dimension of the encoder layers. num_attention_heads (int, optional, defaults to 32) — Number of attention heads in the Transformer encoder. dim_head (int, optional, defaults to 128) — Dimension of attention heads for each attention layer in the Transformer encoder. dim_ff (int, optional, defaults to 10240) — Dimension of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder. num_hidden_layers (int, optional, defaults to 48) — Number of layers of the Transformer encoder. dropout_p (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings and encoder. position_bias_num_buckets (int, optional, defaults to 512) — The number of position_bias buckets. position_bias_max_distance (int, optional, defaults to 2048) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). eps (float, optional, defaults to 1e-6) — The epsilon used by the layer normalization layers. prompt_types (int, optional, defaults to 32) — The number of prompt types. prompt_length (int, optional, defaults to 32) — The length of the prompt. segment_types (int, optional, defaults to 32) — The number of segment types. use_cache (bool, optional, defaults to True) — Whether to use cache. init_std (float, optional, defaults to 1.0) — Initialize parameters with std = init_std. This is the configuration class to store the configuration of a CpmAntModel. It is used to instantiate a CPMAnt model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the CPMAnt openbmb/cpm-ant-10b architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example: >>> from transformers import CpmAntModel, CpmAntConfig >>> configuration = CpmAntConfig() >>> model = CpmAntModel(configuration) >>> configuration = model.config CpmAntTokenizer class transformers.CpmAntTokenizer < source > ( vocab_file bod_token = '<d>' eod_token = '</d>' bos_token = '<s>' eos_token = '</s>' pad_token = '<pad>' unk_token = '<unk>' line_token = '</n>' space_token = '</_>' padding_side = 'left' **kwargs ) Parameters vocab_file (str) — Path to the vocabulary file. bod_token (str, optional, defaults to "<d>") — The beginning of document token. eod_token (str, optional, defaults to "</d>") — The end of document token. bos_token (str, optional, defaults to "<s>") — The beginning of sequence token. eos_token (str, optional, defaults to "</s>") — The end of sequence token. pad_token (str, optional, defaults to "<pad>") — The token used for padding. unk_token (str, optional, defaults to "<unk>") — The unknown token. line_token (str, optional, defaults to "</n>") — The line token. space_token (str, optional, defaults to "</_>") — The space token. Construct a CPMAnt tokenizer. Based on byte-level Byte-Pair-Encoding. build_inputs_with_special_tokens < source > ( token_ids_0: typing.List[int] token_ids_1: typing.List[int] = None ) → List[int] Parameters token_ids_0 (List[int]) — The first tokenized sequence to which special tokens will be added. token_ids_1 (List[int]) — The optional second tokenized sequence to which special tokens will be added. The model input with special tokens. Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. A CPMAnt sequence has the following format: single sequence: [BOS] Sequence. get_special_tokens_mask < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None already_has_special_tokens: bool = False ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. already_has_special_tokens (bool, optional, defaults to False) — Whether or not the token list is already formatted with special tokens for the model. A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model method. CpmAntModel class transformers.CpmAntModel < source > ( config: CpmAntConfig ) The bare CPMAnt Model outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters config (~CpmAntConfig): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
forward < source > ( input_ids: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None use_cache: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None **kwargs ) → transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor) Parameters input_ids (torch.Tensor of shape (batch_size, seq_len)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using CPMAntTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CpmAntConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. 
The CpmAntModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example: >>> from transformers import AutoTokenizer, CpmAntModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("openbmb/cpm-ant-10b") >>> model = CpmAntModel.from_pretrained("openbmb/cpm-ant-10b") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state

CpmAntForCausalLM class transformers.CpmAntForCausalLM < source > ( config: CpmAntConfig ) The CPMAnt Model with a language modeling head on top (linear layer with weights tied to the input embeddings). This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. Parameters config (~CpmAntConfig): Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward < source > ( input_ids: typing.Optional[torch.Tensor] = None past_key_values: typing.Union[typing.List[typing.Tuple[torch.Tensor, torch.Tensor]], NoneType] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None labels: typing.Optional[torch.Tensor] = None return_dict: typing.Optional[bool] = None attention_mask: typing.Optional[torch.Tensor] = None **kwargs ) → transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor)

Parameters input_ids (torch.Tensor of shape (batch_size, seq_len)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using CPMAntTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. labels (torch.Tensor of shape (batch_size, sequence_length), optional) — Labels for computing the language modeling loss. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — CPMAnt processes the attention mask automatically; this argument is a dummy parameter kept for compatibility with the text-generation pipeline.

A transformers.modeling_outputs.CausalLMOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CpmAntConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction). logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

The CpmAntForCausalLM forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Example: >>> import torch >>> from transformers import AutoTokenizer, CpmAntForCausalLM >>> tokenizer = AutoTokenizer.from_pretrained("openbmb/cpm-ant-10b") >>> model = CpmAntForCausalLM.from_pretrained("openbmb/cpm-ant-10b") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs, labels=inputs["input_ids"]) >>> loss = outputs.loss >>> logits = outputs.logits
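The example above only computes the language modeling loss. Because CpmAntForCausalLM is a standard causal LM, it can also be used with the generic generate() API for text generation. The snippet below is a minimal sketch, assuming the openbmb/cpm-ant-10b checkpoint (a 10B-parameter model, so loading it requires substantial memory); the prompt is only an illustration and the continuation depends on the checkpoint and generation settings.

>>> from transformers import AutoTokenizer, CpmAntForCausalLM

>>> tokenizer = AutoTokenizer.from_pretrained("openbmb/cpm-ant-10b")
>>> model = CpmAntForCausalLM.from_pretrained("openbmb/cpm-ant-10b")

>>> # The prompt is only an illustration; CPM-Ant is primarily a Chinese language model.
>>> inputs = tokenizer("今天天气真不错，", return_tensors="pt")
>>> generated_ids = model.generate(**inputs, max_new_tokens=20)
>>> text = tokenizer.batch_decode(generated_ids)[0]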
https://huggingface.co/docs/transformers/model_doc/cpm
CPM Overview The CPM model was proposed in CPM: A Large-scale Generative Chinese Pre-trained Language Model by Zhengyan Zhang, Xu Han, Hao Zhou, Pei Ke, Yuxian Gu, Deming Ye, Yujia Qin, Yusheng Su, Haozhe Ji, Jian Guan, Fanchao Qi, Xiaozhi Wang, Yanan Zheng, Guoyang Zeng, Huanqi Cao, Shengqi Chen, Daixuan Li, Zhenbo Sun, Zhiyuan Liu, Minlie Huang, Wentao Han, Jie Tang, Juanzi Li, Xiaoyan Zhu, Maosong Sun. The abstract from the paper is the following: Pre-trained Language Models (PLMs) have proven to be beneficial for various downstream NLP tasks. Recently, GPT-3, with 175 billion parameters and 570GB training data, drew a lot of attention due to the capacity of few-shot (even zero-shot) learning. However, applying GPT-3 to address Chinese NLP tasks is still challenging, as the training corpus of GPT-3 is primarily English, and the parameters are not publicly available. In this technical report, we release the Chinese Pre-trained Language Model (CPM) with generative pre-training on large-scale Chinese training data. To the best of our knowledge, CPM, with 2.6 billion parameters and 100GB Chinese training data, is the largest Chinese pre-trained language model, which could facilitate several downstream Chinese NLP tasks, such as conversation, essay generation, cloze test, and language understanding. Extensive experiments demonstrate that CPM achieves strong performance on many NLP tasks in the settings of few-shot (even zero-shot) learning. This model was contributed by canwenxu. The original implementation can be found here: https://github.com/TsinghuaAI/CPM-Generate Note: We only have a tokenizer here, since the model architecture is the same as GPT-2. CpmTokenizer class transformers.CpmTokenizer < source > ( vocab_file do_lower_case = False remove_space = True keep_accents = False bos_token = '<s>' eos_token = '</s>' unk_token = '<unk>' sep_token = '<sep>' pad_token = '<pad>' cls_token = '<cls>' mask_token = '<mask>' additional_special_tokens = ['<eop>', '<eod>'] sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None **kwargs ) Runs pre-tokenization with Jieba segmentation tool. It is used in CPM models. build_inputs_with_special_tokens < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs to which the special tokens will be added. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of input IDs with the appropriate special tokens. Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. An XLNet sequence has the following format: single sequence: X <sep> <cls> pair of sequences: A <sep> B <sep> <cls> Converts a sequence of tokens (strings for sub-words) in a single string. create_token_type_ids_from_sequences < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of token type IDs according to the given sequence(s). Create a mask from the two sequences passed to be used in a sequence-pair classification task. An XLNet sequence pair mask has the following format: 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 | first sequence | second sequence | If token_ids_1 is None, this method only returns the first portion of the mask (0s). 
get_special_tokens_mask < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None already_has_special_tokens: bool = False ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. already_has_special_tokens (bool, optional, defaults to False) — Whether or not the token list is already formatted with special tokens for the model. A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model method. CpmTokenizerFast class transformers.CpmTokenizerFast < source > ( vocab_file = None tokenizer_file = None do_lower_case = False remove_space = True keep_accents = False bos_token = '<s>' eos_token = '</s>' unk_token = '<unk>' sep_token = '<sep>' pad_token = '<pad>' cls_token = '<cls>' mask_token = '<mask>' additional_special_tokens = ['<eop>', '<eod>'] **kwargs ) Runs pre-tokenization with Jieba segmentation tool. It is used in CPM models. build_inputs_with_special_tokens < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs to which the special tokens will be added. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of input IDs with the appropriate special tokens. Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. An XLNet sequence has the following format: single sequence: X <sep> <cls> pair of sequences: A <sep> B <sep> <cls> create_token_type_ids_from_sequences < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of token type IDs according to the given sequence(s). Create a mask from the two sequences passed to be used in a sequence-pair classification task. An XLNet sequence pair mask has the following format: 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 | first sequence | second sequence | If token_ids_1 is None, this method only returns the first portion of the mask (0s).
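Since only the tokenizer ships with this model page, a short usage sketch may help. It uses the slow CpmTokenizer (CpmTokenizerFast exposes the same two methods documented above) and assumes the TsinghuaAI/CPM-Generate checkpoint on the Hub with the jieba and sentencepiece dependencies installed; it simply illustrates the XLNet-style special-token layout and token type IDs described above.

>>> from transformers import CpmTokenizer

>>> # Assumes the TsinghuaAI/CPM-Generate checkpoint and the jieba/sentencepiece dependencies.
>>> tokenizer = CpmTokenizer.from_pretrained("TsinghuaAI/CPM-Generate")
>>> ids_a = tokenizer.encode("你好，世界", add_special_tokens=False)
>>> ids_b = tokenizer.encode("今天天气不错", add_special_tokens=False)

>>> # Single sequence: X <sep> <cls>; pair of sequences: A <sep> B <sep> <cls>
>>> single = tokenizer.build_inputs_with_special_tokens(ids_a)
>>> pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)
>>> # Token type IDs follow the XLNet pair mask: 0s for the first segment, 1s for the second.
>>> type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)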
https://huggingface.co/docs/transformers/model_doc/ctrl
CTRL Overview The CTRL model was proposed in CTRL: A Conditional Transformer Language Model for Controllable Generation by Nitish Shirish Keskar, Bryan McCann, Lav R. Varshney, Caiming Xiong and Richard Socher. It’s a causal (unidirectional) transformer pre-trained using language modeling on a very large corpus of ~140 GB of text data with the first token reserved as a control code (such as Links, Books, Wikipedia, etc.). The abstract from the paper is the following: Large-scale language models show promising text generation capabilities, but users cannot easily control particular aspects of the generated text. We release CTRL, a 1.63 billion-parameter conditional transformer language model, trained to condition on control codes that govern style, content, and task-specific behavior. Control codes were derived from structure that naturally co-occurs with raw text, preserving the advantages of unsupervised learning while providing more explicit control over text generation. These codes also allow CTRL to predict which parts of the training data are most likely given a sequence. This provides a potential method for analyzing large amounts of data via model-based source attribution. Tips: CTRL makes use of control codes to generate text: it requires generations to be started by certain words, sentences or links to generate coherent text. Refer to the original implementation for more information. CTRL is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than the left. CTRL was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next token in a sequence. Leveraging this feature allows CTRL to generate syntactically coherent text, as can be observed in the run_generation.py example script. The PyTorch models can take past_key_values as input, which are the previously computed key/value attention pairs. TensorFlow models accept past as input. Using the past_key_values value prevents the model from re-computing pre-computed values in the context of text generation. See the forward method for more information on the usage of this argument. This model was contributed by keskarnitishr. The original code can be found here. Documentation resources Text classification task guide Causal language modeling task guide CTRLConfig class transformers.CTRLConfig < source > ( vocab_size = 246534 n_positions = 256 n_embd = 1280 dff = 8192 n_layer = 48 n_head = 16 resid_pdrop = 0.1 embd_pdrop = 0.1 layer_norm_epsilon = 1e-06 initializer_range = 0.02 use_cache = True **kwargs ) Parameters vocab_size (int, optional, defaults to 246534) — Vocabulary size of the CTRL model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling CTRLModel or TFCTRLModel. n_positions (int, optional, defaults to 256) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). n_embd (int, optional, defaults to 1280) — Dimensionality of the embeddings and hidden states. dff (int, optional, defaults to 8192) — Dimensionality of the inner dimension of the feed forward networks (FFN). n_layer (int, optional, defaults to 48) — Number of hidden layers in the Transformer encoder. n_head (int, optional, defaults to 16) — Number of attention heads for each attention layer in the Transformer encoder. 
resid_pdrop (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. embd_pdrop (int, optional, defaults to 0.1) — The dropout ratio for the embeddings. layer_norm_epsilon (float, optional, defaults to 1e-6) — The epsilon to use in the layer normalization layers initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). This is the configuration class to store the configuration of a CTRLModel or a TFCTRLModel. It is used to instantiate a CTRL model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Salesforce/ctrl architecture from SalesForce. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Examples: >>> from transformers import CTRLConfig, CTRLModel >>> >>> configuration = CTRLConfig() >>> >>> model = CTRLModel(configuration) >>> >>> configuration = model.config CTRLTokenizer class transformers.CTRLTokenizer < source > ( vocab_file merges_file unk_token = '<unk>' **kwargs ) Parameters vocab_file (str) — Path to the vocabulary file. merges_file (str) — Path to the merges file. unk_token (str, optional, defaults to "<unk>") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. Construct a CTRL tokenizer. Based on Byte-Pair-Encoding. This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. save_vocabulary < source > ( save_directory: str filename_prefix: typing.Optional[str] = None ) CTRLModel class transformers.CTRLModel < source > ( config ) Parameters config (CTRLConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare CTRL Model transformer outputting raw hidden-states without any specific head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPast or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — input_ids_length = sequence_length if past_key_values is None else past_key_values[0].shape[-2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. If past_key_values is used, only input IDs that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? past_key_values (Tuple[Tuple[torch.FloatTensor]] of length config.n_layers) — Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have their past given to this model should not be passed as input ids as they have already been computed. attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. 
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CTRLConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The CTRLModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, CTRLModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("Salesforce/ctrl") >>> model = CTRLModel.from_pretrained("Salesforce/ctrl") >>> >>> inputs = tokenizer("Opinion My dog is cute", return_tensors="pt") >>> assert inputs["input_ids"][0, 0].item() in tokenizer.control_codes.values() >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state >>> list(last_hidden_states.shape) [1, 5, 1280] CTRLLMHeadModel class transformers.CTRLLMHeadModel < source > ( config ) Parameters config (CTRLConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The CTRL Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). This model inherits from PreTrainedModel. 
Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.CausalLMOutputWithPast or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — input_ids_length = sequence_length if past_key_values is None else past_key_values[0].shape[-2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. If past_key_values is used, only input IDs that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? past_key_values (Tuple[Tuple[torch.FloatTensor]] of length config.n_layers) — Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have their past given to this model should not be passed as input ids as they have already been computed. attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. 
use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set labels = input_ids Indices are selected in [-100, 0, ..., config.vocab_size] All labels set to -100 are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size] A transformers.modeling_outputs.CausalLMOutputWithPast or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CTRLConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction). logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) Contains pre-computed hidden-states (key and values in the self-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The CTRLLMHeadModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: >>> import torch >>> from transformers import AutoTokenizer, CTRLLMHeadModel >>> tokenizer = AutoTokenizer.from_pretrained("Salesforce/ctrl") >>> model = CTRLLMHeadModel.from_pretrained("Salesforce/ctrl") >>> >>> inputs = tokenizer("Wikipedia The llama is", return_tensors="pt") >>> assert inputs["input_ids"][0, 0].item() in tokenizer.control_codes.values() >>> sequence_ids = model.generate(inputs["input_ids"]) >>> sequences = tokenizer.batch_decode(sequence_ids) >>> sequences ['Wikipedia The llama is a member of the family Bovidae. It is native to the Andes of Peru,'] >>> outputs = model(**inputs, labels=inputs["input_ids"]) >>> round(outputs.loss.item(), 2) 9.21 >>> list(outputs.logits.shape) [1, 5, 246534] CTRLForSequenceClassification class transformers.CTRLForSequenceClassification < source > ( config ) Parameters config (CTRLConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The CTRL Model transformer with a sequence classification head on top (linear layer). CTRLForSequenceClassification uses the last token in order to do the classification, as other causal models (e.g. GPT-2) do. Since it does classification on the last token, it requires to know the position of the last token. If a pad_token_id is defined in the configuration, it finds the last token that is not a padding token in each row. If no pad_token_id is defined, it simply takes the last value in each row of the batch. Since it cannot guess the padding tokens when inputs_embeds are passed instead of input_ids, it does the same (take the last value in each row of the batch). This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — input_ids_length = sequence_length if past_key_values is None else past_key_values[0].shape[-2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. If past_key_values is used, only input IDs that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? 
past_key_values (Tuple[Tuple[torch.FloatTensor]] of length config.n_layers) — Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have their past given to this model should not be passed as input ids as they have already been computed. attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CTRLConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). 
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The CTRLForSequenceClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example of single-label classification: >>> import torch >>> from transformers import AutoTokenizer, CTRLForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("Salesforce/ctrl") >>> model = CTRLForSequenceClassification.from_pretrained("Salesforce/ctrl") >>> >>> inputs = tokenizer("Opinion My dog is cute", return_tensors="pt") >>> assert inputs["input_ids"][0, 0].item() in tokenizer.control_codes.values() >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_id = logits.argmax().item() >>> model.config.id2label[predicted_class_id] 'LABEL_0' >>> import torch >>> torch.manual_seed(42) >>> >>> num_labels = len(model.config.id2label) >>> model = CTRLForSequenceClassification.from_pretrained("Salesforce/ctrl", num_labels=num_labels) >>> labels = torch.tensor(1) >>> loss = model(**inputs, labels=labels).loss >>> round(loss.item(), 2) 0.35 Example of multi-label classification: >>> import torch >>> from transformers import AutoTokenizer, CTRLForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("Salesforce/ctrl") >>> model = CTRLForSequenceClassification.from_pretrained( ... "Salesforce/ctrl", problem_type="multi_label_classification" ... ) >>> >>> inputs = tokenizer("Opinion My dog is cute", return_tensors="pt") >>> assert inputs["input_ids"][0, 0].item() in tokenizer.control_codes.values() >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_id = logits.argmax().item() >>> model.config.id2label[predicted_class_id] 'LABEL_0' >>> >>> num_labels = len(model.config.id2label) >>> model = CTRLForSequenceClassification.from_pretrained("Salesforce/ctrl", num_labels=num_labels) >>> num_labels = len(model.config.id2label) >>> labels = torch.nn.functional.one_hot(torch.tensor([predicted_class_id]), num_classes=num_labels).to( ... torch.float ... ) >>> loss = model(**inputs, labels=labels).loss >>> loss.backward() TFCTRLModel class transformers.TFCTRLModel < source > ( *args **kwargs ) Parameters config (CTRLConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare CTRL Model transformer outputting raw hidden-states without any specific head on top. This model inherits from TFPreTrainedModel. 
Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None use_cache: Optional[bool] = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFBaseModelOutputWithPast or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past is None else past[0].shape[-2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. If past is used, only input IDs that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? past (List[tf.Tensor] of length config.n_layers) — Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see past output below). Can be used to speed up sequential decoding. The token ids which have their past given to this model should not be passed as input ids as they have already been computed. attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. 
What are attention masks? token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor or Numpy array of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. use_cache (bool, optional) — If set to True, past key value states are returned and can be used to speed up decoding (see past). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). A transformers.modeling_tf_outputs.TFBaseModelOutputWithPast or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CTRLConfig) and inputs. last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output. past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)). Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. 
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFCTRLModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFCTRLModel >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("Salesforce/ctrl") >>> model = TFCTRLModel.from_pretrained("Salesforce/ctrl") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> outputs = model(inputs) >>> last_hidden_states = outputs.last_hidden_state TFCTRLLMHeadModel class transformers.TFCTRLLMHeadModel < source > ( *args **kwargs ) Parameters config (CTRLConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The CTRL Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! 
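The input formats described above can be made concrete with a short sketch. It is only an illustration, assuming the Salesforce/ctrl checkpoint; the three calls below are equivalent ways of feeding the same encoded prompt to the Keras model.

>>> import tensorflow as tf
>>> from transformers import AutoTokenizer, TFCTRLLMHeadModel

>>> tokenizer = AutoTokenizer.from_pretrained("Salesforce/ctrl")
>>> model = TFCTRLLMHeadModel.from_pretrained("Salesforce/ctrl")
>>> encoded = tokenizer("Opinion My dog is cute", return_tensors="tf")

>>> # 1) keyword arguments, 2) a dictionary in the first positional argument,
>>> # 3) a single tensor carrying input_ids only.
>>> out_kwargs = model(input_ids=encoded["input_ids"], attention_mask=encoded["attention_mask"])
>>> out_dict = model({"input_ids": encoded["input_ids"], "attention_mask": encoded["attention_mask"]})
>>> out_single = model(encoded["input_ids"])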
call < source > ( input_ids: TFModelInputType | None = None past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None use_cache: Optional[bool] = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFCausalLMOutputWithPast or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past is None else past[0].shape[-2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. If past is used, only input IDs that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? past (List[tf.Tensor] of length config.n_layers) — Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see past output below). Can be used to speed up sequential decoding. The token ids which have their past given to this model should not be passed as input ids as they have already been computed. attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor or Numpy array of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. use_cache (bool, optional) — If set to True, past key value states are returned and can be used to speed up decoding (see past). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. 
See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). labels (tf.Tensor of shape (batch_size, sequence_length), optional) — Labels for computing the cross entropy classification loss. Indices should be in [0, ..., config.vocab_size - 1]. A transformers.modeling_tf_outputs.TFCausalLMOutputWithPast or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CTRLConfig) and inputs. loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss (for next-token prediction). logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). past_key_values (List[tf.Tensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of tf.Tensor of length config.n_layers, with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)). Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFCTRLLMHeadModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFCTRLLMHeadModel >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("Salesforce/ctrl") >>> model = TFCTRLLMHeadModel.from_pretrained("Salesforce/ctrl") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> outputs = model(inputs) >>> logits = outputs.logits TFCTRLForSequenceClassification class transformers.TFCTRLForSequenceClassification < source > ( *args **kwargs ) Parameters config (CTRLConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. 
The CTRL Model transformer with a sequence classification head on top (linear layer). TFCTRLForSequenceClassification uses the last token in order to do the classification, as other causal models (e.g. GPT-1, GPT-2) do. Since it does classification on the last token, it requires to know the position of the last token. If a pad_token_id is defined in the configuration, it finds the last token that is not a padding token in each row. If no pad_token_id is defined, it simply takes the last value in each row of the batch. Since it cannot guess the padding tokens when inputs_embeds are passed instead of input_ids, it does the same (take the last value in each row of the batch). This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None past_key_values: Optional[Tuple[Tuple[Union[np.ndarray, tf.Tensor]]]] = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None use_cache: Optional[bool] = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past is None else past[0].shape[-2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. 
If past is used, only input IDs that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? past (List[tf.Tensor] of length config.n_layers) — Contains pre-computed hidden-states (key and values in the attention blocks) as computed by the model (see past output below). Can be used to speed up sequential decoding. The token ids which have their past given to this model should not be passed as input ids as they have already been computed. attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (tf.Tensor or Numpy array of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor or Numpy array of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. use_cache (bool, optional) — If set to True, past key value states are returned and can be used to speed up decoding (see past). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). labels (tf.Tensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), if config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CTRLConfig) and inputs. 
loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFCTRLForSequenceClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFCTRLForSequenceClassification >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("Salesforce/ctrl") >>> model = TFCTRLForSequenceClassification.from_pretrained("Salesforce/ctrl") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> logits = model(**inputs).logits >>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0]) >>> >>> num_labels = len(model.config.id2label) >>> model = TFCTRLForSequenceClassification.from_pretrained("Salesforce/ctrl", num_labels=num_labels) >>> labels = tf.constant(1) >>> loss = model(**inputs, labels=labels).loss
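Because the classification head reads the logits of the last non-padding token (see the note above), batched inputs need a defined pad_token_id. CTRL does not ship with a pad token, so the sketch below reuses an existing special token for padding; treat it as an illustrative workaround rather than an official recipe.

>>> from transformers import AutoTokenizer, TFCTRLForSequenceClassification
>>> import tensorflow as tf

>>> tokenizer = AutoTokenizer.from_pretrained("Salesforce/ctrl")
>>> model = TFCTRLForSequenceClassification.from_pretrained("Salesforce/ctrl")

>>> # CTRL has no pad token by default; reuse the unknown token so the model can
>>> # locate the last non-padding token in every row of the batch
>>> tokenizer.pad_token = tokenizer.unk_token
>>> model.config.pad_token_id = tokenizer.pad_token_id

>>> batch = tokenizer(
...     ["Hello, my dog is cute", "A second, noticeably longer sentence that forces padding"],
...     padding=True,
...     return_tensors="tf",
... )
>>> logits = model(**batch).logits
>>> predicted_class_ids = tf.math.argmax(logits, axis=-1).numpy()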
https://huggingface.co/docs/transformers/model_doc/cvt
Convolutional Vision Transformer (CvT) Overview The CvT model was proposed in CvT: Introducing Convolutions to Vision Transformers by Haiping Wu, Bin Xiao, Noel Codella, Mengchen Liu, Xiyang Dai, Lu Yuan and Lei Zhang. The Convolutional vision Transformer (CvT) improves the Vision Transformer (ViT) in performance and efficiency by introducing convolutions into ViT to yield the best of both designs. The abstract from the paper is the following: We present in this paper a new architecture, named Convolutional vision Transformer (CvT), that improves Vision Transformer (ViT) in performance and efficiency by introducing convolutions into ViT to yield the best of both designs. This is accomplished through two primary modifications: a hierarchy of Transformers containing a new convolutional token embedding, and a convolutional Transformer block leveraging a convolutional projection. These changes introduce desirable properties of convolutional neural networks (CNNs) to the ViT architecture (\ie shift, scale, and distortion invariance) while maintaining the merits of Transformers (\ie dynamic attention, global context, and better generalization). We validate CvT by conducting extensive experiments, showing that this approach achieves state-of-the-art performance over other Vision Transformers and ResNets on ImageNet-1k, with fewer parameters and lower FLOPs. In addition, performance gains are maintained when pretrained on larger datasets (\eg ImageNet-22k) and fine-tuned to downstream tasks. Pre-trained on ImageNet-22k, our CvT-W24 obtains a top-1 accuracy of 87.7\% on the ImageNet-1k val set. Finally, our results show that the positional encoding, a crucial component in existing Vision Transformers, can be safely removed in our model, simplifying the design for higher resolution vision tasks. Tips: CvT models are regular Vision Transformers, but trained with convolutions. They outperform the original model (ViT) when fine-tuned on ImageNet-1K and CIFAR-100. You can check out demo notebooks regarding inference as well as fine-tuning on custom data here (you can just replace ViTFeatureExtractor by AutoImageProcessor and ViTForImageClassification by CvtForImageClassification). The available checkpoints are either (1) pre-trained on ImageNet-22k (a collection of 14 million images and 22k classes) only, (2) also fine-tuned on ImageNet-22k or (3) also fine-tuned on ImageNet-1k (also referred to as ILSVRC 2012, a collection of 1.3 million images and 1,000 classes). This model was contributed by anugunj. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with CvT. Image Classification CvtForImageClassification is supported by this example script and notebook. See also: Image classification task guide If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. 
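To make the fine-tuning tip above concrete, the sketch below swaps the ImageNet classification head of a pre-trained CvT checkpoint for a new one sized to a custom label set; the cat/dog labels are placeholders for your own dataset.

>>> from transformers import AutoImageProcessor, CvtForImageClassification

>>> # placeholder labels for a hypothetical custom dataset
>>> labels = ["cat", "dog"]
>>> id2label = {i: label for i, label in enumerate(labels)}
>>> label2id = {label: i for i, label in enumerate(labels)}

>>> image_processor = AutoImageProcessor.from_pretrained("microsoft/cvt-13")
>>> model = CvtForImageClassification.from_pretrained(
...     "microsoft/cvt-13",
...     num_labels=len(labels),
...     id2label=id2label,
...     label2id=label2id,
...     ignore_mismatched_sizes=True,  # drop the 1000-class ImageNet head and initialize a new one
... )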
CvtConfig class transformers.CvtConfig < source > ( num_channels = 3 patch_sizes = [7, 3, 3] patch_stride = [4, 2, 2] patch_padding = [2, 1, 1] embed_dim = [64, 192, 384] num_heads = [1, 3, 6] depth = [1, 2, 10] mlp_ratio = [4.0, 4.0, 4.0] attention_drop_rate = [0.0, 0.0, 0.0] drop_rate = [0.0, 0.0, 0.0] drop_path_rate = [0.0, 0.0, 0.1] qkv_bias = [True, True, True] cls_token = [False, False, True] qkv_projection_method = ['dw_bn', 'dw_bn', 'dw_bn'] kernel_qkv = [3, 3, 3] padding_kv = [1, 1, 1] stride_kv = [2, 2, 2] padding_q = [1, 1, 1] stride_q = [1, 1, 1] initializer_range = 0.02 layer_norm_eps = 1e-12 **kwargs ) Parameters num_channels (int, optional, defaults to 3) — The number of input channels. patch_sizes (List[int], optional, defaults to [7, 3, 3]) — The kernel size of each encoder’s patch embedding. patch_stride (List[int], optional, defaults to [4, 2, 2]) — The stride size of each encoder’s patch embedding. patch_padding (List[int], optional, defaults to [2, 1, 1]) — The padding size of each encoder’s patch embedding. embed_dim (List[int], optional, defaults to [64, 192, 384]) — Dimension of each of the encoder blocks. num_heads (List[int], optional, defaults to [1, 3, 6]) — Number of attention heads for each attention layer in each block of the Transformer encoder. depth (List[int], optional, defaults to [1, 2, 10]) — The number of layers in each encoder block. mlp_ratio (List[float], optional, defaults to [4.0, 4.0, 4.0]) — Ratio of the size of the hidden layer compared to the size of the input layer of the Mix FFNs in the encoder blocks. attention_drop_rate (List[float], optional, defaults to [0.0, 0.0, 0.0]) — The dropout ratio for the attention probabilities. drop_rate (List[float], optional, defaults to [0.0, 0.0, 0.0]) — The dropout ratio for the patch embeddings probabilities. drop_path_rate (List[float], optional, defaults to [0.0, 0.0, 0.1]) — The dropout probability for stochastic depth, used in the blocks of the Transformer encoder. qkv_bias (List[bool], optional, defaults to [True, True, True]) — The bias bool for query, key and value in the attention layers. cls_token (List[bool], optional, defaults to [False, False, True]) — Whether or not to add a classification token to the output of each of the last 3 stages. qkv_projection_method (List[string], optional, defaults to ["dw_bn", "dw_bn", "dw_bn"]) — The projection method for query, key and value. Default is depth-wise convolutions with batch norm. For linear projection use "avg". kernel_qkv (List[int], optional, defaults to [3, 3, 3]) — The kernel size for query, key and value in the attention layer. padding_kv (List[int], optional, defaults to [1, 1, 1]) — The padding size for key and value in the attention layer. stride_kv (List[int], optional, defaults to [2, 2, 2]) — The stride size for key and value in the attention layer. padding_q (List[int], optional, defaults to [1, 1, 1]) — The padding size for query in the attention layer. stride_q (List[int], optional, defaults to [1, 1, 1]) — The stride size for query in the attention layer. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers. This is the configuration class to store the configuration of a CvtModel. It is used to instantiate a CvT model according to the specified arguments, defining the model architecture. 
Instantiating a configuration with the defaults will yield a similar configuration to that of the CvT microsoft/cvt-13 architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import CvtConfig, CvtModel >>> >>> configuration = CvtConfig() >>> >>> model = CvtModel(configuration) >>> >>> configuration = model.config CvtModel class transformers.CvtModel < source > ( config add_pooling_layer = True ) Parameters config (CvtConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare Cvt Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: typing.Optional[torch.Tensor] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.cvt.modeling_cvt.BaseModelOutputWithCLSToken or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See CvtImageProcessor.__call__ for details. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.models.cvt.modeling_cvt.BaseModelOutputWithCLSToken or tuple(torch.FloatTensor) A transformers.models.cvt.modeling_cvt.BaseModelOutputWithCLSToken or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CvtConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. cls_token_value (torch.FloatTensor of shape (batch_size, 1, hidden_size)) — Classification token at the output of the last layer of the model. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. The CvtModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: >>> from transformers import AutoImageProcessor, CvtModel >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> image_processor = AutoImageProcessor.from_pretrained("microsoft/cvt-13") >>> model = CvtModel.from_pretrained("microsoft/cvt-13") >>> inputs = image_processor(image, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state >>> list(last_hidden_states.shape) [1, 384, 14, 14] CvtForImageClassification class transformers.CvtForImageClassification < source > ( config ) Parameters config (CvtConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Cvt Model transformer with an image classification head on top (a linear layer on top of the final hidden state of the [CLS] token) e.g. for ImageNet. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See CvtImageProcessor.__call__ for details. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_outputs.ImageClassifierOutputWithNoAttention or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CvtConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the model at the output of each stage. The CvtForImageClassification forward method, overrides the __call__ special method. 
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoImageProcessor, CvtForImageClassification >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> image_processor = AutoImageProcessor.from_pretrained("microsoft/cvt-13") >>> model = CvtForImageClassification.from_pretrained("microsoft/cvt-13") >>> inputs = image_processor(image, return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> >>> predicted_label = logits.argmax(-1).item() >>> print(model.config.id2label[predicted_label]) tabby, tabby cat TFCvtModel class transformers.TFCvtModel < source > ( *args **kwargs ) Parameters config (CvtConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare Cvt Model transformer outputting raw hidden-states without any specific head on top. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TF 2.0 models accepts two formats as inputs: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional arguments. This second option is useful when using tf.keras.Model.fit method which currently requires having all the tensors in the first argument of the model call function: model(inputs). call < source > ( pixel_values: tf.Tensor | None = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: Optional[bool] = False ) → transformers.models.cvt.modeling_tf_cvt.TFBaseModelOutputWithCLSToken or tuple(tf.Tensor) Parameters pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor] `Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See CvtImageProcessor.__call__ for details. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to `False“) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). 
Returns transformers.models.cvt.modeling_tf_cvt.TFBaseModelOutputWithCLSToken or tuple(tf.Tensor) A transformers.models.cvt.modeling_tf_cvt.TFBaseModelOutputWithCLSToken or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CvtConfig) and inputs. last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. cls_token_value (tf.Tensor of shape (batch_size, 1, hidden_size)) — Classification token at the output of the last layer of the model. hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. The TFCvtModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import AutoImageProcessor, TFCvtModel >>> from PIL import Image >>> import requests >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image_processor = AutoImageProcessor.from_pretrained("microsoft/cvt-13") >>> model = TFCvtModel.from_pretrained("microsoft/cvt-13") >>> inputs = image_processor(images=image, return_tensors="tf") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state TFCvtForImageClassification class transformers.TFCvtForImageClassification < source > ( *args **kwargs ) Parameters config (CvtConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Cvt Model transformer with an image classification head on top (a linear layer on top of the final hidden state of the [CLS] token) e.g. for ImageNet. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TF 2.0 models accepts two formats as inputs: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional arguments. This second option is useful when using tf.keras.Model.fit method which currently requires having all the tensors in the first argument of the model call function: model(inputs). 
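For example, training with model.fit() can look like the following sketch, where train_dataset is a placeholder tf.data.Dataset whose elements are dictionaries containing "pixel_values" and "labels"; when no explicit loss is passed to compile(), recent versions of transformers TF models fall back to their internal loss computation.

>>> import tensorflow as tf
>>> from transformers import TFCvtForImageClassification

>>> model = TFCvtForImageClassification.from_pretrained("microsoft/cvt-13")
>>> model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5))

>>> # `train_dataset` is a placeholder tf.data.Dataset yielding dicts with
>>> # "pixel_values" and "labels"; the labels are consumed by the model's internal loss
>>> model.fit(train_dataset, epochs=3)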
call < source > ( pixel_values: tf.Tensor | None = None labels: tf.Tensor | None = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFImageClassifierOutputWithNoAttention or tuple(tf.Tensor) Parameters pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor] `Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See CvtImageProcessor.__call__ for details. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to `False“) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). labels (tf.Tensor or np.ndarray of shape (batch_size,), optional) — Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). Returns transformers.modeling_tf_outputs.TFImageClassifierOutputWithNoAttention or tuple(tf.Tensor) A transformers.modeling_tf_outputs.TFImageClassifierOutputWithNoAttention or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (CvtConfig) and inputs. loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each stage) of shape (batch_size, num_channels, height, width). Hidden-states (also called feature maps) of the model at the output of each stage. The TFCvtForImageClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Examples:

>>> from transformers import AutoImageProcessor, TFCvtForImageClassification
>>> import tensorflow as tf
>>> from PIL import Image
>>> import requests

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = AutoImageProcessor.from_pretrained("microsoft/cvt-13")
>>> model = TFCvtForImageClassification.from_pretrained("microsoft/cvt-13")

>>> inputs = image_processor(images=image, return_tensors="tf")
>>> outputs = model(**inputs)
>>> logits = outputs.logits

>>> # the model predicts one of the 1000 ImageNet classes
>>> predicted_class_idx = tf.math.argmax(logits, axis=-1)[0]
>>> print("Predicted class:", model.config.id2label[int(predicted_class_idx)])
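The hidden_states described above can also be inspected directly; the short sketch below (reusing the microsoft/cvt-13 checkpoint and the COCO image from the examples) simply prints the shape of the output of each stage.

>>> from transformers import AutoImageProcessor, TFCvtModel
>>> from PIL import Image
>>> import requests

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = AutoImageProcessor.from_pretrained("microsoft/cvt-13")
>>> model = TFCvtModel.from_pretrained("microsoft/cvt-13")

>>> inputs = image_processor(images=image, return_tensors="tf")
>>> outputs = model(**inputs, output_hidden_states=True)

>>> # one tensor per stage of the hierarchy; the spatial resolution shrinks and the
>>> # embedding dimension grows from stage to stage
>>> for hidden_state in outputs.hidden_states:
...     print(hidden_state.shape)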
https://huggingface.co/docs/transformers/model_doc/decision_transformer
Decision Transformer Overview The Decision Transformer model was proposed in Decision Transformer: Reinforcement Learning via Sequence Modeling by Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, Pieter Abbeel, Aravind Srinivas, Igor Mordatch. The abstract from the paper is the following: We introduce a framework that abstracts Reinforcement Learning (RL) as a sequence modeling problem. This allows us to draw upon the simplicity and scalability of the Transformer architecture, and associated advances in language modeling such as GPT-x and BERT. In particular, we present Decision Transformer, an architecture that casts the problem of RL as conditional sequence modeling. Unlike prior approaches to RL that fit value functions or compute policy gradients, Decision Transformer simply outputs the optimal actions by leveraging a causally masked Transformer. By conditioning an autoregressive model on the desired return (reward), past states, and actions, our Decision Transformer model can generate future actions that achieve the desired return. Despite its simplicity, Decision Transformer matches or exceeds the performance of state-of-the-art model-free offline RL baselines on Atari, OpenAI Gym, and Key-to-Door tasks. Tips: This version of the model is for tasks where the state is a vector, image-based states will come soon. This model was contributed by edbeeching. The original code can be found here. DecisionTransformerConfig class transformers.DecisionTransformerConfig < source > ( state_dim = 17 act_dim = 4 hidden_size = 128 max_ep_len = 4096 action_tanh = True vocab_size = 1 n_positions = 1024 n_layer = 3 n_head = 1 n_inner = None activation_function = 'relu' resid_pdrop = 0.1 embd_pdrop = 0.1 attn_pdrop = 0.1 layer_norm_epsilon = 1e-05 initializer_range = 0.02 scale_attn_weights = True use_cache = True bos_token_id = 50256 eos_token_id = 50256 scale_attn_by_inverse_layer_idx = False reorder_and_upcast_attn = False **kwargs ) Parameters state_dim (int, optional, defaults to 17) — The state size for the RL environment act_dim (int, optional, defaults to 4) — The size of the output action space hidden_size (int, optional, defaults to 128) — The size of the hidden layers max_ep_len (int, optional, defaults to 4096) — The maximum length of an episode in the environment action_tanh (bool, optional, defaults to True) — Whether to use a tanh activation on action prediction vocab_size (int, optional, defaults to 50257) — Vocabulary size of the GPT-2 model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling DecisionTransformerModel. n_positions (int, optional, defaults to 1024) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). n_layer (int, optional, defaults to 3) — Number of hidden layers in the Transformer encoder. n_head (int, optional, defaults to 1) — Number of attention heads for each attention layer in the Transformer encoder. n_inner (int, optional) — Dimensionality of the inner feed-forward layers. If unset, will default to 4 times n_embd. activation_function (str, optional, defaults to "gelu") — Activation function, to be selected in the list ["relu", "silu", "gelu", "tanh", "gelu_new"]. resid_pdrop (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. 
embd_pdrop (int, optional, defaults to 0.1) — The dropout ratio for the embeddings. attn_pdrop (float, optional, defaults to 0.1) — The dropout ratio for the attention. layer_norm_epsilon (float, optional, defaults to 1e-5) — The epsilon to use in the layer normalization layers. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. scale_attn_weights (bool, optional, defaults to True) — Scale attention weights by dividing by sqrt(hidden_size).. use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). scale_attn_by_inverse_layer_idx (bool, optional, defaults to False) — Whether to additionally scale attention weights by 1 / layer_idx + 1. reorder_and_upcast_attn (bool, optional, defaults to False) — Whether to scale keys (K) prior to computing attention (dot-product) and upcast attention dot-product/softmax to float() when training with mixed precision. This is the configuration class to store the configuration of a DecisionTransformerModel. It is used to instantiate a Decision Transformer model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the standard DecisionTransformer architecture. Many of the config options are used to instatiate the GPT2 model that is used as part of the architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import DecisionTransformerConfig, DecisionTransformerModel >>> >>> configuration = DecisionTransformerConfig() >>> >>> model = DecisionTransformerModel(configuration) >>> >>> configuration = model.config DecisionTransformerGPT2Model class transformers.DecisionTransformerGPT2Model < source > ( config ) forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None encoder_hidden_states: typing.Optional[torch.Tensor] = None encoder_attention_mask: typing.Optional[torch.FloatTensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) DecisionTransformerModel class transformers.DecisionTransformerModel < source > ( config ) Parameters config (~DecisionTransformerConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The Decision Transformer Model This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. The model builds upon the GPT2 architecture to perform autoregressive prediction of actions in an offline RL setting. 
Refer to the paper for more details: https://arxiv.org/abs/2106.01345 forward < source > ( states: typing.Optional[torch.FloatTensor] = None actions: typing.Optional[torch.FloatTensor] = None rewards: typing.Optional[torch.FloatTensor] = None returns_to_go: typing.Optional[torch.FloatTensor] = None timesteps: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None output_hidden_states: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.decision_transformer.modeling_decision_transformer.DecisionTransformerOutput or tuple(torch.FloatTensor) Parameters states (torch.FloatTensor of shape (batch_size, episode_length, state_dim)) — The states for each step in the trajectory actions (torch.FloatTensor of shape (batch_size, episode_length, act_dim)) — The actions taken by the “expert” policy for the current state, these are masked for auto regressive prediction rewards (torch.FloatTensor of shape (batch_size, episode_length, 1)) — The rewards for each state, action returns_to_go (torch.FloatTensor of shape (batch_size, episode_length, 1)) — The returns for each state in the trajectory timesteps (torch.LongTensor of shape (batch_size, episode_length)) — The timestep for each step in the trajectory attention_mask (torch.FloatTensor of shape (batch_size, episode_length)) — Masking, used to mask the actions when performing autoregressive prediction Returns transformers.models.decision_transformer.modeling_decision_transformer.DecisionTransformerOutput or tuple(torch.FloatTensor) A transformers.models.decision_transformer.modeling_decision_transformer.DecisionTransformerOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DecisionTransformerConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. state_preds (torch.FloatTensor of shape (batch_size, sequence_length, state_dim)) — Environment state predictions action_preds (torch.FloatTensor of shape (batch_size, sequence_length, action_dim)) — Model action predictions return_preds (torch.FloatTensor of shape (batch_size, sequence_length, 1)) — Predicted returns for each state hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The DecisionTransformerModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Examples:

>>> import gym
>>> import torch
>>> from transformers import DecisionTransformerModel

>>> device = "cpu"  # or "cuda" if a GPU is available
>>> TARGET_RETURN = 3.6  # hypothetical target return; pick a value suited to the environment and checkpoint

>>> model = DecisionTransformerModel.from_pretrained("edbeeching/decision-transformer-gym-hopper-medium")
>>> # evaluation
>>> model = model.to(device)
>>> model.eval()

>>> env = gym.make("Hopper-v3")
>>> state_dim = env.observation_space.shape[0]
>>> act_dim = env.action_space.shape[0]

>>> state = env.reset()
>>> states = torch.from_numpy(state).reshape(1, 1, state_dim).to(device=device, dtype=torch.float32)
>>> actions = torch.zeros((1, 1, act_dim), device=device, dtype=torch.float32)
>>> rewards = torch.zeros(1, 1, device=device, dtype=torch.float32)
>>> target_return = torch.tensor(TARGET_RETURN, dtype=torch.float32).reshape(1, 1)
>>> timesteps = torch.tensor(0, device=device, dtype=torch.long).reshape(1, 1)
>>> attention_mask = torch.zeros(1, 1, device=device, dtype=torch.float32)

>>> # forward pass
>>> with torch.no_grad():
...     state_preds, action_preds, return_preds = model(
...         states=states,
...         actions=actions,
...         rewards=rewards,
...         returns_to_go=target_return,
...         timesteps=timesteps,
...         attention_mask=attention_mask,
...         return_dict=False,
...     )
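Continuing the example above, the predicted action for the most recent timestep can be applied to the environment; a full autoregressive rollout would, after every step, append the new state, the executed action, the reward, a decremented return-to-go, the next timestep index and an extended attention mask before calling the model again. The two lines below are a minimal, illustrative continuation and assume the pre-0.26 Gym step API used in the example.

>>> # act on the prediction for the most recent timestep in the context window
>>> predicted_action = action_preds[0, -1].numpy()
>>> state, reward, done, info = env.step(predicted_action)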
https://huggingface.co/docs/transformers/model_doc/deberta-v2
DeBERTa-v2 Overview The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen It is based on Google’s BERT model released in 2018 and Facebook’s RoBERTa model released in 2019. It builds on RoBERTa with disentangled attention and enhanced mask decoder training with half of the data used in RoBERTa. The abstract from the paper is the following: Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks. In this paper we propose a new model architecture DeBERTa (Decoding-enhanced BERT with disentangled attention) that improves the BERT and RoBERTa models using two novel techniques. The first is the disentangled attention mechanism, where each word is represented using two vectors that encode its content and position, respectively, and the attention weights among words are computed using disentangled matrices on their contents and relative positions. Second, an enhanced mask decoder is used to replace the output softmax layer to predict the masked tokens for model pretraining. We show that these two techniques significantly improve the efficiency of model pretraining and performance of downstream tasks. Compared to RoBERTa-Large, a DeBERTa model trained on half of the training data performs consistently better on a wide range of NLP tasks, achieving improvements on MNLI by +0.9% (90.2% vs. 91.1%), on SQuAD v2.0 by +2.3% (88.4% vs. 90.7%) and RACE by +3.6% (83.2% vs. 86.8%). The DeBERTa code and pre-trained models will be made publicly available at https://github.com/microsoft/DeBERTa. The following information is visible directly on the original implementation repository. DeBERTa v2 is the second version of the DeBERTa model. It includes the 1.5B model used for the SuperGLUE single-model submission and achieving 89.9, versus human baseline 89.8. You can find more details about this submission in the authors’ blog New in v2: Vocabulary In v2 the tokenizer is changed to use a new vocabulary of size 128K built from the training data. Instead of a GPT2-based tokenizer, the tokenizer is now sentencepiece-based tokenizer. nGiE(nGram Induced Input Encoding) The DeBERTa-v2 model uses an additional convolution layer aside with the first transformer layer to better learn the local dependency of input tokens. Sharing position projection matrix with content projection matrix in attention layer Based on previous experiments, this can save parameters without affecting the performance. Apply bucket to encode relative positions The DeBERTa-v2 model uses log bucket to encode relative positions similar to T5. 900M model & 1.5B model Two additional model sizes are available: 900M and 1.5B, which significantly improves the performance of downstream tasks. This model was contributed by DeBERTa. This model TF 2.0 implementation was contributed by kamalkraj. The original code can be found here. 
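The new SentencePiece-based vocabulary described above can be inspected directly; the sketch below loads the tokenizer of the microsoft/deberta-v2-xlarge checkpoint and is purely illustrative.

>>> from transformers import AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
>>> print(tokenizer.vocab_size)  # the v2 vocabulary has roughly 128K pieces built from the training data
>>> print(tokenizer.tokenize("DeBERTa-v2 uses a SentencePiece vocabulary."))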
Documentation resources Text classification task guide Token classification task guide Question answering task guide Masked language modeling task guide Multiple choice task guide DebertaV2Config class transformers.DebertaV2Config < source > ( vocab_size = 128100 hidden_size = 1536 num_hidden_layers = 24 num_attention_heads = 24 intermediate_size = 6144 hidden_act = 'gelu' hidden_dropout_prob = 0.1 attention_probs_dropout_prob = 0.1 max_position_embeddings = 512 type_vocab_size = 0 initializer_range = 0.02 layer_norm_eps = 1e-07 relative_attention = False max_relative_positions = -1 pad_token_id = 0 position_biased_input = True pos_att_type = None pooler_dropout = 0 pooler_hidden_act = 'gelu' **kwargs ) Parameters vocab_size (int, optional, defaults to 128100) — Vocabulary size of the DeBERTa-v2 model. Defines the number of different tokens that can be represented by the input_ids passed when calling DebertaV2Model. hidden_size (int, optional, defaults to 1536) — Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (int, optional, defaults to 24) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 24) — Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (int, optional, defaults to 6144) — Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder. hidden_act (str or Callable, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu", "tanh", "gelu_fast", "mish", "linear", "sigmoid" and "gelu_new" are supported. hidden_dropout_prob (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (float, optional, defaults to 0.1) — The dropout ratio for the attention probabilities. max_position_embeddings (int, optional, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). type_vocab_size (int, optional, defaults to 0) — The vocabulary size of the token_type_ids passed when calling DebertaV2Model or TFDebertaV2Model. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (float, optional, defaults to 1e-7) — The epsilon used by the layer normalization layers. relative_attention (bool, optional, defaults to False) — Whether to use relative position encoding. max_relative_positions (int, optional, defaults to -1) — The range of relative positions [-max_position_embeddings, max_position_embeddings]. Use the same value as max_position_embeddings. pad_token_id (int, optional, defaults to 0) — The value used to pad input_ids. position_biased_input (bool, optional, defaults to True) — Whether to add absolute position embedding to the content embedding. pos_att_type (List[str], optional) — The type of relative position attention. It can be a combination of ["p2c", "c2p"], e.g. ["p2c"], ["p2c", "c2p"]. This is the configuration class to store the configuration of a DebertaV2Model. It is used to instantiate a DeBERTa-v2 model according to the specified arguments, defining the model architecture. 
Instantiating a configuration with the defaults will yield a similar configuration to that of the DeBERTa microsoft/deberta-v2-xlarge architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import DebertaV2Config, DebertaV2Model >>> >>> configuration = DebertaV2Config() >>> >>> model = DebertaV2Model(configuration) >>> >>> configuration = model.config DebertaV2Tokenizer class transformers.DebertaV2Tokenizer < source > ( vocab_file do_lower_case = False split_by_punct = False bos_token = '[CLS]' eos_token = '[SEP]' unk_token = '[UNK]' sep_token = '[SEP]' pad_token = '[PAD]' cls_token = '[CLS]' mask_token = '[MASK]' sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None **kwargs ) Parameters vocab_file (str) — SentencePiece file (generally has a .spm extension) that contains the vocabulary necessary to instantiate a tokenizer. do_lower_case (bool, optional, defaults to False) — Whether or not to lowercase the input when tokenizing. bos_token (string, optional, defaults to "[CLS]") — The beginning of sequence token that was used during pre-training. Can be used a sequence classifier token. When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the cls_token. eos_token (string, optional, defaults to "[SEP]") — The end of sequence token. When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the sep_token. unk_token (str, optional, defaults to "[UNK]") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. sep_token (str, optional, defaults to "[SEP]") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. pad_token (str, optional, defaults to "[PAD]") — The token used for padding, for example when batching sequences of different lengths. cls_token (str, optional, defaults to "[CLS]") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. mask_token (str, optional, defaults to "[MASK]") — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. sp_model_kwargs (dict, optional) — Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for SentencePiece can be used, among other things, to set: enable_sampling: Enable subword regularization. nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout. nbest_size = {0,1}: No sampling is performed. nbest_size > 1: samples from the nbest_size results. nbest_size < 0: assuming that nbest_size is infinite and samples from the all hypothesis (lattice) using forward-filtering-and-backward-sampling algorithm. alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout. Constructs a DeBERTa-v2 tokenizer. Based on SentencePiece. 
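A minimal usage sketch follows (using the microsoft/deberta-v2-xlarge checkpoint referenced above; the slow tokenizer requires the sentencepiece package to be installed):

>>> from transformers import DebertaV2Tokenizer

>>> tokenizer = DebertaV2Tokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
>>> encoding = tokenizer("Is this a question?", "Yes, it is.")
>>> # the sentence pair is wrapped with special tokens as [CLS] A [SEP] B [SEP]
>>> print(tokenizer.decode(encoding["input_ids"]))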
build_inputs_with_special_tokens < source > ( token_ids_0 token_ids_1 = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs to which the special tokens will be added. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of input IDs with the appropriate special tokens. Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A DeBERTa sequence has the following format: single sequence: [CLS] X [SEP] pair of sequences: [CLS] A [SEP] B [SEP] get_special_tokens_mask < source > ( token_ids_0 token_ids_1 = None already_has_special_tokens = False ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. already_has_special_tokens (bool, optional, defaults to False) — Whether or not the token list is already formatted with special tokens for the model. A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model or encode_plus methods. create_token_type_ids_from_sequences < source > ( token_ids_0 token_ids_1 = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of token type IDs according to the given sequence(s). Create a mask from the two sequences passed to be used in a sequence-pair classification task. A DeBERTa sequence pair mask has the following format: 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 | first sequence | second sequence | If token_ids_1 is None, this method only returns the first portion of the mask (0s). save_vocabulary < source > ( save_directory: str filename_prefix: typing.Optional[str] = None ) DebertaV2TokenizerFast class transformers.DebertaV2TokenizerFast < source > ( vocab_file = None tokenizer_file = None do_lower_case = False split_by_punct = False bos_token = '[CLS]' eos_token = '[SEP]' unk_token = '[UNK]' sep_token = '[SEP]' pad_token = '[PAD]' cls_token = '[CLS]' mask_token = '[MASK]' **kwargs ) Parameters vocab_file (str) — SentencePiece file (generally has a .spm extension) that contains the vocabulary necessary to instantiate a tokenizer. do_lower_case (bool, optional, defaults to False) — Whether or not to lowercase the input when tokenizing. bos_token (string, optional, defaults to "[CLS]") — The beginning of sequence token that was used during pre-training. Can be used a sequence classifier token. When building a sequence using special tokens, this is not the token that is used for the beginning of sequence. The token used is the cls_token. eos_token (string, optional, defaults to "[SEP]") — The end of sequence token. When building a sequence using special tokens, this is not the token that is used for the end of sequence. The token used is the sep_token. unk_token (str, optional, defaults to "[UNK]") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. sep_token (str, optional, defaults to "[SEP]") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. 
pad_token (str, optional, defaults to "[PAD]") — The token used for padding, for example when batching sequences of different lengths. cls_token (str, optional, defaults to "[CLS]") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. mask_token (str, optional, defaults to "[MASK]") — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. sp_model_kwargs (dict, optional) — Will be passed to the SentencePieceProcessor.__init__() method. The Python wrapper for SentencePiece can be used, among other things, to set: enable_sampling: Enable subword regularization. nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout. nbest_size = {0,1}: No sampling is performed. nbest_size > 1: samples from the nbest_size results. nbest_size < 0: assuming that nbest_size is infinite and samples from the all hypothesis (lattice) using forward-filtering-and-backward-sampling algorithm. alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout. Constructs a DeBERTa-v2 fast tokenizer. Based on SentencePiece. build_inputs_with_special_tokens < source > ( token_ids_0 token_ids_1 = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs to which the special tokens will be added. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of input IDs with the appropriate special tokens. Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A DeBERTa sequence has the following format: single sequence: [CLS] X [SEP] pair of sequences: [CLS] A [SEP] B [SEP] create_token_type_ids_from_sequences < source > ( token_ids_0 token_ids_1 = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of token type IDs according to the given sequence(s). Create a mask from the two sequences passed to be used in a sequence-pair classification task. A DeBERTa sequence pair mask has the following format: 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 | first sequence | second sequence | If token_ids_1 is None, this method only returns the first portion of the mask (0s). DebertaV2Model class transformers.DebertaV2Model < source > ( config ) Parameters config (DebertaV2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare DeBERTa Model transformer outputting raw hidden-states without any specific head on top. The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It’s build on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and enhanced mask decoder. With those two improvements, it out perform BERT/RoBERTa on a majority of tasks with 80GB pretraining data. This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
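Beyond the basic forward pass shown later in this section, the raw hidden-states can be pooled into fixed-size sentence representations. The sketch below is an illustrative addition (not from the original reference) that mean-pools the final hidden states over non-padding tokens; the input sentences are placeholders, and the 1536-dimensional output follows from the hidden_size default documented above.

>>> import torch
>>> from transformers import AutoTokenizer, DebertaV2Model

>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
>>> model = DebertaV2Model.from_pretrained("microsoft/deberta-v2-xlarge")

>>> sentences = ["DeBERTa uses disentangled attention.", "It also has an enhanced mask decoder."]
>>> inputs = tokenizer(sentences, padding=True, return_tensors="pt")

>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> # Average the last hidden states over real (non-padding) tokens to get one vector per sentence
>>> mask = inputs["attention_mask"].unsqueeze(-1)
>>> embeddings = (outputs.last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1)
>>> embeddings.shape
torch.Size([2, 1536])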
forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DebertaV2Config) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. 
The DebertaV2Model forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, DebertaV2Model >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge") >>> model = DebertaV2Model.from_pretrained("microsoft/deberta-v2-xlarge") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state DebertaV2PreTrainedModel class transformers.DebertaV2PreTrainedModel < source > ( config: PretrainedConfig *inputs **kwargs ) An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models. _forward_unimplemented < source > ( *input: typing.Any ) Defines the computation performed at every call. Should be overridden by all subclasses. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them. DebertaV2ForMaskedLM class transformers.DebertaV2ForMaskedLM < source > ( config ) Parameters config (DebertaV2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DeBERTa Model with a language modeling head on top. The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It’s build on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and enhanced mask decoder. With those two improvements, it out perform BERT/RoBERTa on a majority of tasks with 80GB pretraining data. This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? 
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size] A transformers.modeling_outputs.MaskedLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DebertaV2Config) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The DebertaV2ForMaskedLM forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example:

>>> from transformers import AutoTokenizer, DebertaV2ForMaskedLM
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
>>> model = DebertaV2ForMaskedLM.from_pretrained("microsoft/deberta-v2-xlarge")

>>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # retrieve the index of [MASK]
>>> mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
>>> predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)

>>> labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
>>> # mask the labels of non-[MASK] tokens
>>> labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)

>>> outputs = model(**inputs, labels=labels)

DebertaV2ForSequenceClassification class transformers.DebertaV2ForSequenceClassification < source > ( config ) Parameters config (DebertaV2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DeBERTa Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It’s built on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and an enhanced mask decoder. With those two improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data. This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs?
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DebertaV2Config) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The DebertaV2ForSequenceClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example of single-label classification: >>> import torch >>> from transformers import AutoTokenizer, DebertaV2ForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge") >>> model = DebertaV2ForSequenceClassification.from_pretrained("microsoft/deberta-v2-xlarge") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... 
logits = model(**inputs).logits >>> predicted_class_id = logits.argmax().item() >>> >>> num_labels = len(model.config.id2label) >>> model = DebertaV2ForSequenceClassification.from_pretrained("microsoft/deberta-v2-xlarge", num_labels=num_labels) >>> labels = torch.tensor([1]) >>> loss = model(**inputs, labels=labels).loss Example of multi-label classification: >>> import torch >>> from transformers import AutoTokenizer, DebertaV2ForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge") >>> model = DebertaV2ForSequenceClassification.from_pretrained("microsoft/deberta-v2-xlarge", problem_type="multi_label_classification") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5] >>> >>> num_labels = len(model.config.id2label) >>> model = DebertaV2ForSequenceClassification.from_pretrained( ... "microsoft/deberta-v2-xlarge", num_labels=num_labels, problem_type="multi_label_classification" ... ) >>> labels = torch.sum( ... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1 ... ).to(torch.float) >>> loss = model(**inputs, labels=labels).loss DebertaV2ForTokenClassification class transformers.DebertaV2ForTokenClassification < source > ( config ) Parameters config (DebertaV2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DeBERTa Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It’s build on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and enhanced mask decoder. With those two improvements, it out perform BERT/RoBERTa on a majority of tasks with 80GB pretraining data. This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? 
token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1]. A transformers.modeling_outputs.TokenClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DebertaV2Config) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The DebertaV2ForTokenClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, DebertaV2ForTokenClassification >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge") >>> model = DebertaV2ForTokenClassification.from_pretrained("microsoft/deberta-v2-xlarge") >>> inputs = tokenizer( ... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt" ... ) >>> with torch.no_grad(): ... 
logits = model(**inputs).logits

>>> predicted_token_class_ids = logits.argmax(-1)

>>> # Note that tokens are classified rather than input words, which means that there
>>> # might be more predicted token classes than words; multiple token classes might
>>> # account for the same word
>>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]]

>>> labels = predicted_token_class_ids
>>> loss = model(**inputs, labels=labels).loss

DebertaV2ForQuestionAnswering class transformers.DebertaV2ForQuestionAnswering < source > ( config ) Parameters config (DebertaV2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DeBERTa Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute span start logits and span end logits). The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It’s built on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and an enhanced mask decoder. With those two improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data. This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None start_positions: typing.Optional[torch.Tensor] = None end_positions: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. start_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. end_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DebertaV2Config) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The DebertaV2ForQuestionAnswering forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, DebertaV2ForQuestionAnswering >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge") >>> model = DebertaV2ForQuestionAnswering.from_pretrained("microsoft/deberta-v2-xlarge") >>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" >>> inputs = tokenizer(question, text, return_tensors="pt") >>> with torch.no_grad(): ... 
outputs = model(**inputs) >>> answer_start_index = outputs.start_logits.argmax() >>> answer_end_index = outputs.end_logits.argmax() >>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] >>> >>> target_start_index = torch.tensor([2]) >>> target_end_index = torch.tensor([9]) >>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index) >>> loss = outputs.loss DebertaV2ForMultipleChoice class transformers.DebertaV2ForMultipleChoice < source > ( config ) Parameters config (DebertaV2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DeBERTa Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It’s build on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and enhanced mask decoder. With those two improvements, it out perform BERT/RoBERTa on a majority of tasks with 80GB pretraining data. This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. 
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See input_ids above) A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DebertaV2Config) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above). Classification scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The DebertaV2ForMultipleChoice forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, DebertaV2ForMultipleChoice >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge") >>> model = DebertaV2ForMultipleChoice.from_pretrained("microsoft/deberta-v2-xlarge") >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." >>> choice0 = "It is eaten with a fork and a knife." >>> choice1 = "It is eaten while held in the hand." >>> labels = torch.tensor(0).unsqueeze(0) >>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True) >>> outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) >>> >>> loss = outputs.loss >>> logits = outputs.logits TFDebertaV2Model class transformers.TFDebertaV2Model < source > ( *args **kwargs ) Parameters config (DebertaV2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. 
The bare DeBERTa Model transformer outputting raw hidden-states without any specific head on top. The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It’s build on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and enhanced mask decoder. With those two improvements, it out perform BERT/RoBERTa on a majority of tasks with 80GB pretraining data. This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor) Parameters input_ids (np.ndarray, tf.Tensor, List[tf.Tensor] `Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? 
position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a [`~utils.ModelOutput“] instead of a plain tuple. A transformers.modeling_tf_outputs.TFBaseModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DebertaV2Config) and inputs. last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(tf.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFDebertaV2Model forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFDebertaV2Model >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("kamalkraj/deberta-v2-xlarge") >>> model = TFDebertaV2Model.from_pretrained("kamalkraj/deberta-v2-xlarge") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> outputs = model(inputs) >>> last_hidden_states = outputs.last_hidden_state TFDebertaV2PreTrainedModel class transformers.TFDebertaV2PreTrainedModel < source > ( *args **kwargs ) An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models. call < source > ( inputs training = None mask = None ) Calls the model on new inputs and returns the outputs as tensors. In this case call() just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs). Note: This method should not be called directly. It is only meant to be overridden when subclassing tf.keras.Model. 
To call a model on an input, always use the __call__() method, i.e. model(inputs), which relies on the underlying call() method. TFDebertaV2ForMaskedLM class transformers.TFDebertaV2ForMaskedLM < source > ( *args **kwargs ) Parameters config (DebertaV2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DeBERTa Model with a language modeling head on top. The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It’s build on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and enhanced mask decoder. With those two improvements, it out perform BERT/RoBERTa on a majority of tasks with 80GB pretraining data. This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor) Parameters input_ids (np.ndarray, tf.Tensor, List[tf.Tensor] `Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? 
attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a [`~utils.ModelOutput“] instead of a plain tuple. labels (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size] A transformers.modeling_tf_outputs.TFMaskedLMOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DebertaV2Config) and inputs. loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Masked language modeling (MLM) loss. logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFDebertaV2ForMaskedLM forward method, overrides the __call__ special method. 
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFDebertaV2ForMaskedLM >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("kamalkraj/deberta-v2-xlarge") >>> model = TFDebertaV2ForMaskedLM.from_pretrained("kamalkraj/deberta-v2-xlarge") >>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="tf") >>> logits = model(**inputs).logits >>> # retrieve index of [MASK] >>> mask_token_index = tf.where((inputs.input_ids == tokenizer.mask_token_id)[0]) >>> selected_logits = tf.gather_nd(logits[0], indices=mask_token_index) >>> predicted_token_id = tf.math.argmax(selected_logits, axis=-1) >>> labels = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"] >>> # mask labels of non-[MASK] tokens >>> labels = tf.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100) >>> outputs = model(**inputs, labels=labels) TFDebertaV2ForSequenceClassification class transformers.TFDebertaV2ForSequenceClassification < source > ( *args **kwargs ) Parameters config (DebertaV2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DeBERTa Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It’s built on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and an enhanced mask decoder. With those two improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data. This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! 
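For instance, a minimal fine-tuning sketch with model.fit() could look like the following. The toy texts and labels, num_labels=2, batch size and optimizer settings are illustrative assumptions rather than part of the official documentation, and the sketch relies on recent versions of Transformers falling back to the model's built-in loss when compile() is called without one:

>>> from transformers import AutoTokenizer, TFDebertaV2ForSequenceClassification
>>> import tensorflow as tf
>>> tokenizer = AutoTokenizer.from_pretrained("kamalkraj/deberta-v2-xlarge")
>>> model = TFDebertaV2ForSequenceClassification.from_pretrained("kamalkraj/deberta-v2-xlarge", num_labels=2)
>>> # toy dataset for illustration only
>>> texts = ["I loved this movie", "This was terrible"]
>>> labels = [1, 0]
>>> encodings = tokenizer(texts, padding=True, return_tensors="tf")
>>> dataset = tf.data.Dataset.from_tensor_slices((dict(encodings), labels)).batch(2)
>>> # no explicit loss: the model computes its internal loss from the labels
>>> model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5))
>>> model.fit(dataset, epochs=1)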
If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor) Parameters input_ids (np.ndarray, tf.Tensor, List[tf.Tensor] `Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a [`~utils.ModelOutput“] instead of a plain tuple. 
labels (tf.Tensor or np.ndarray of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss). If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DebertaV2Config) and inputs. loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFDebertaV2ForSequenceClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFDebertaV2ForSequenceClassification >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("kamalkraj/deberta-v2-xlarge") >>> model = TFDebertaV2ForSequenceClassification.from_pretrained("kamalkraj/deberta-v2-xlarge") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> logits = model(**inputs).logits >>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0]) >>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)` >>> num_labels = len(model.config.id2label) >>> model = TFDebertaV2ForSequenceClassification.from_pretrained("kamalkraj/deberta-v2-xlarge", num_labels=num_labels) >>> labels = tf.constant(1) >>> loss = model(**inputs, labels=labels).loss TFDebertaV2ForTokenClassification class transformers.TFDebertaV2ForTokenClassification < source > ( *args **kwargs ) Parameters config (DebertaV2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DeBERTa Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It’s built on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and an enhanced mask decoder. 
With those two improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data. This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor) Parameters input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray]; each example must have the shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? 
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1]. A transformers.modeling_tf_outputs.TFTokenClassifierOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DebertaV2Config) and inputs. loss (tf.Tensor of shape (n,), optional, where n is the number of unmasked labels, returned when labels is provided) — Classification loss. logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFDebertaV2ForTokenClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFDebertaV2ForTokenClassification >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("kamalkraj/deberta-v2-xlarge") >>> model = TFDebertaV2ForTokenClassification.from_pretrained("kamalkraj/deberta-v2-xlarge") >>> inputs = tokenizer( ... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf" ... ) >>> logits = model(**inputs).logits >>> predicted_token_class_ids = tf.math.argmax(logits, axis=-1) >>> # Note that tokens are classified rather than input words, which means that >>> # there might be more predicted token classes than words. >>> # Multiple token classes might account for the same word >>> predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()] >>> labels = predicted_token_class_ids >>> loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss) TFDebertaV2ForQuestionAnswering class transformers.TFDebertaV2ForQuestionAnswering < source > ( *args **kwargs ) Parameters config (DebertaV2Config) — Model configuration class with all the parameters of the model. 
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DeBERTa Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute span start logits and span end logits). The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It’s built on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and an enhanced mask decoder. With those two improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data. This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None start_positions: np.ndarray | tf.Tensor | None = None end_positions: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor) Parameters input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray]; each example must have the shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. 
Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a [`~utils.ModelOutput“] instead of a plain tuple. start_positions (tf.Tensor or np.ndarray of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. end_positions (tf.Tensor or np.ndarray of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. A transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DebertaV2Config) and inputs. loss (tf.Tensor of shape (batch_size, ), optional, returned when start_positions and end_positions are provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. start_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. 
The TFDebertaV2ForQuestionAnswering forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFDebertaV2ForQuestionAnswering >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("kamalkraj/deberta-v2-xlarge") >>> model = TFDebertaV2ForQuestionAnswering.from_pretrained("kamalkraj/deberta-v2-xlarge") >>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" >>> inputs = tokenizer(question, text, return_tensors="tf") >>> outputs = model(**inputs) >>> answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0]) >>> answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0]) >>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] >>> # target is "nice puppet" >>> target_start_index = tf.constant([14]) >>> target_end_index = tf.constant([15]) >>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index) >>> loss = tf.math.reduce_mean(outputs.loss) TFDebertaV2ForMultipleChoice class transformers.TFDebertaV2ForMultipleChoice < source > ( *args **kwargs ) Parameters config (DebertaV2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DeBERTa Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It’s built on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and an enhanced mask decoder. With those two improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data. This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! 
If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor) Parameters input_ids (np.ndarray, tf.Tensor, List[tf.Tensor] `Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, num_choices, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (np.ndarray or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (np.ndarray or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a [`~utils.ModelOutput“] instead of a plain tuple. 
labels (tf.Tensor or np.ndarray of shape (batch_size,), optional) — Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices - 1] where num_choices is the size of the second dimension of the input tensors. (See input_ids above) A transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DebertaV2Config) and inputs. loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification loss. logits (tf.Tensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above). Classification scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFDebertaV2ForMultipleChoice forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFDebertaV2ForMultipleChoice >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("kamalkraj/deberta-v2-xlarge") >>> model = TFDebertaV2ForMultipleChoice.from_pretrained("kamalkraj/deberta-v2-xlarge") >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." >>> choice0 = "It is eaten with a fork and a knife." >>> choice1 = "It is eaten while held in the hand." >>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="tf", padding=True) >>> inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()} >>> outputs = model(inputs) >>> # the linear classifier still needs to be trained >>> logits = outputs.logits
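As with the other task heads, a labels tensor can be passed to obtain the multiple-choice loss directly. The short sketch below reuses the inputs from the example above and assumes, purely for illustration, that choice0 is the correct continuation:

>>> # label 0 selects choice0 as the correct answer (illustrative assumption)
>>> labels = tf.constant([0])
>>> outputs = model(**inputs, labels=labels)
>>> loss = outputs.loss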
https://huggingface.co/docs/transformers/model_doc/data2vec
Data2Vec Overview The Data2Vec model was proposed in data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and Michael Auli. Data2Vec proposes a unified framework for self-supervised learning across different data modalities - text, audio and images. Importantly, predicted targets for pre-training are contextualized latent representations of the inputs, rather than modality-specific, context-independent targets. The abstract from the paper is the following: While the general idea of self-supervised learning is identical across modalities, the actual algorithms and objectives differ widely because they were developed with a single modality in mind. To get us closer to general self-supervised learning, we present data2vec, a framework that uses the same learning method for either speech, NLP or computer vision. The core idea is to predict latent representations of the full input data based on a masked view of the input in a selfdistillation setup using a standard Transformer architecture. Instead of predicting modality-specific targets such as words, visual tokens or units of human speech which are local in nature, data2vec predicts contextualized latent representations that contain information from the entire input. Experiments on the major benchmarks of speech recognition, image classification, and natural language understanding demonstrate a new state of the art or competitive performance to predominant approaches. Models and code are available at www.github.com/pytorch/fairseq/tree/master/examples/data2vec. Tips: Data2VecAudio, Data2VecText, and Data2VecVision have all been trained using the same self-supervised learning method. For Data2VecAudio, preprocessing is identical to Wav2Vec2Model, including feature extraction For Data2VecText, preprocessing is identical to RobertaModel, including tokenization. For Data2VecVision, preprocessing is identical to BeitModel, including feature extraction. This model was contributed by edugp and patrickvonplaten. sayakpaul and Rocketknight1 contributed Data2Vec for vision in TensorFlow. The original code (for NLP and Speech) can be found here. The original code for vision can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Data2Vec. Image Classification Data2VecVisionForImageClassification is supported by this example script and notebook. To fine-tune TFData2VecVisionForImageClassification on a custom dataset, see this notebook. Data2VecText documentation resources Text classification task guide Token classification task guide Question answering task guide Causal language modeling task guide Masked language modeling task guide Multiple choice task guide Data2VecAudio documentation resources Audio classification task guide Automatic speech recognition task guide Data2VecVision documentation resources Image classification Semantic segmentation If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. 
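Since preprocessing reuses the corresponding single-modality pipelines, the usual Auto classes resolve the right preprocessor for each checkpoint. The snippet below is a minimal sketch; the checkpoint names are the ones referenced elsewhere on this page, and the concrete preprocessor classes returned are resolved automatically:

>>> from transformers import AutoTokenizer, AutoProcessor, AutoImageProcessor
>>> # text: RoBERTa-style tokenization
>>> tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base")
>>> # audio: Wav2Vec2-style feature extraction (plus a tokenizer for the fine-tuned ASR checkpoint)
>>> processor = AutoProcessor.from_pretrained("facebook/data2vec-audio-base-960h")
>>> # vision: BEiT-style image preprocessing
>>> image_processor = AutoImageProcessor.from_pretrained("facebook/data2vec-vision-base")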
Data2VecTextConfig class transformers.Data2VecTextConfig < source > ( vocab_size = 30522 hidden_size = 768 num_hidden_layers = 12 num_attention_heads = 12 intermediate_size = 3072 hidden_act = 'gelu' hidden_dropout_prob = 0.1 attention_probs_dropout_prob = 0.1 max_position_embeddings = 512 type_vocab_size = 2 initializer_range = 0.02 layer_norm_eps = 1e-12 pad_token_id = 1 bos_token_id = 0 eos_token_id = 2 position_embedding_type = 'absolute' use_cache = True classifier_dropout = None **kwargs ) Parameters vocab_size (int, optional, defaults to 30522) — Vocabulary size of the DATA2VEC model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling Data2VecModel. hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder. hidden_act (str or Callable, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported. hidden_dropout_prob (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (float, optional, defaults to 0.1) — The dropout ratio for the attention probabilities. max_position_embeddings (int, optional, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). type_vocab_size (int, optional, defaults to 2) — The vocabulary size of the token_type_ids passed when calling Data2VecModel. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers. position_embedding_type (str, optional, defaults to "absolute") — Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For positional embeddings use "absolute". For more information on "relative_key", please refer to Self-Attention with Relative Position Representations (Shaw et al.). For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models with Better Relative Position Embeddings (Huang et al.). is_decoder (bool, optional, defaults to False) — Whether the model is used as a decoder or not. If False, the model is used as an encoder. use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). Only relevant if config.is_decoder=True. classifier_dropout (float, optional) — The dropout ratio for the classification head. This is the configuration class to store the configuration of a Data2VecTextModel and Data2VecTextModel. It is used to instantiate a Data2VecText model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Data2VecText facebook/data2vec-text-base architecture. 
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Examples: >>> from transformers import Data2VecTextConfig, Data2VecTextModel >>> >>> configuration = Data2VecTextConfig() >>> >>> model = Data2VecTextModel(configuration) >>> >>> configuration = model.config Data2VecAudioConfig class transformers.Data2VecAudioConfig < source > ( vocab_size = 32 hidden_size = 768 num_hidden_layers = 12 num_attention_heads = 12 intermediate_size = 3072 hidden_act = 'gelu' hidden_dropout = 0.1 activation_dropout = 0.1 attention_dropout = 0.1 feat_proj_dropout = 0.0 final_dropout = 0.1 layerdrop = 0.1 initializer_range = 0.02 layer_norm_eps = 1e-05 feat_extract_activation = 'gelu' conv_dim = (512, 512, 512, 512, 512, 512, 512) conv_stride = (5, 2, 2, 2, 2, 2, 2) conv_kernel = (10, 3, 3, 3, 3, 2, 2) conv_bias = False num_conv_pos_embedding_groups = 16 conv_pos_kernel_size = 19 num_conv_pos_embeddings = 5 mask_time_prob = 0.05 mask_time_length = 10 mask_time_min_masks = 2 mask_feature_prob = 0.0 mask_feature_length = 10 mask_feature_min_masks = 0 ctc_loss_reduction = 'sum' ctc_zero_infinity = False use_weighted_layer_sum = False classifier_proj_size = 256 tdnn_dim = (512, 512, 512, 512, 1500) tdnn_kernel = (5, 3, 3, 1, 1) tdnn_dilation = (1, 2, 3, 1, 1) xvector_output_dim = 512 pad_token_id = 0 bos_token_id = 1 eos_token_id = 2 add_adapter = False adapter_kernel_size = 3 adapter_stride = 2 num_adapter_layers = 3 output_hidden_size = None **kwargs ) Parameters vocab_size (int, optional, defaults to 32) — Vocabulary size of the Data2VecAudio model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling Data2VecAudioModel or TFData2VecAudioModel. Vocabulary size of the model. Defines the different tokens that can be represented by the inputs_ids passed to the forward method of Data2VecAudioModel. hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder. hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported. hidden_dropout (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. activation_dropout (float, optional, defaults to 0.1) — The dropout ratio for activations inside the fully connected layer. attention_dropout (float, optional, defaults to 0.1) — The dropout ratio for the attention probabilities. final_dropout (float, optional, defaults to 0.1) — The dropout probability for the final projection layer of Data2VecAudioForCTC. layerdrop (float, optional, defaults to 0.1) — The LayerDrop probability. See the [LayerDrop paper](see https://arxiv.org/abs/1909.11556) for more details. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. 
layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers. feat_proj_dropout (float, optional, defaults to 0.0) — The dropout probability for output of the feature encoder. feat_extract_activation (str, optional, defaults to "gelu") — The non-linear activation function (function or string) in the 1D convolutional layers of the feature extractor. If string, "gelu", "relu", "selu" and "gelu_new" are supported. conv_dim (Tuple[int] or List[int], optional, defaults to (512, 512, 512, 512, 512, 512, 512)) — A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the feature encoder. The length of conv_dim defines the number of 1D convolutional layers. conv_stride (Tuple[int] or List[int], optional, defaults to (5, 2, 2, 2, 2, 2, 2)) — A tuple of integers defining the stride of each 1D convolutional layer in the feature encoder. The length of conv_stride defines the number of convolutional layers and has to match the length of conv_dim. conv_kernel (Tuple[int] or List[int], optional, defaults to (10, 3, 3, 3, 3, 3, 3)) — A tuple of integers defining the kernel size of each 1D convolutional layer in the feature encoder. The length of conv_kernel defines the number of convolutional layers and has to match the length of conv_dim. conv_bias (bool, optional, defaults to False) — Whether the 1D convolutional layers have a bias. num_conv_pos_embeddings (int, optional, defaults to 128) — Number of convolutional positional embeddings. Defines the kernel size of 1D convolutional positional embeddings layer. num_conv_pos_embedding_groups (int, optional, defaults to 16) — Number of groups of 1D convolutional positional embeddings layer. mask_time_prob (float, optional, defaults to 0.05) — Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking procedure generates mask_time_prob * len(time_axis) / mask_time_length independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, mask_time_prob should be prob_vector_start * mask_time_length. Note that overlap may decrease the actual percentage of masked vectors (a short worked example follows the parameter list below). mask_time_length (int, optional, defaults to 10) — Length of vector span along the time axis. mask_time_min_masks (int, optional, defaults to 2) — The minimum number of masks of length mask_time_length generated along the time axis, each time step, irrespectively of mask_time_prob. Only relevant if mask_time_prob * len(time_axis) / mask_time_length < mask_time_min_masks. mask_feature_prob (float, optional, defaults to 0.0) — Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The masking procedure generates mask_feature_prob * len(feature_axis) / mask_feature_length independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, mask_feature_prob should be prob_vector_start * mask_feature_length. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if apply_spec_augment is True. mask_feature_length (int, optional, defaults to 10) — Length of vector span along the feature axis. mask_feature_min_masks (int, optional, defaults to 0) — The minimum number of masks of length mask_feature_length generated along the feature axis, each time step, irrespectively of mask_feature_prob. 
Only relevant if ”mask_feature_prob*len(feature_axis)/mask_feature_length < mask_feature_min_masks” ctc_loss_reduction (str, optional, defaults to "sum") — Specifies the reduction to apply to the output of torch.nn.CTCLoss. Only relevant when training an instance of Data2VecAudioForCTC. ctc_zero_infinity (bool, optional, defaults to False) — Whether to zero infinite losses and the associated gradients of torch.nn.CTCLoss. Infinite losses mainly occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance of Data2VecAudioForCTC. use_weighted_layer_sum (bool, optional, defaults to False) — Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an instance of Data2VecAudioForSequenceClassification. classifier_proj_size (int, optional, defaults to 256) — Dimensionality of the projection before token mean-pooling for classification. tdnn_dim (Tuple[int] or List[int], optional, defaults to (512, 512, 512, 512, 1500)) — A tuple of integers defining the number of output channels of each 1D convolutional layer in the TDNN module of the XVector model. The length of tdnn_dim defines the number of TDNN layers. tdnn_kernel (Tuple[int] or List[int], optional, defaults to (5, 3, 3, 1, 1)) — A tuple of integers defining the kernel size of each 1D convolutional layer in the TDNN module of the XVector model. The length of tdnn_kernel has to match the length of tdnn_dim. tdnn_dilation (Tuple[int] or List[int], optional, defaults to (1, 2, 3, 1, 1)) — A tuple of integers defining the dilation factor of each 1D convolutional layer in TDNN module of the XVector model. The length of tdnn_dilation has to match the length of tdnn_dim. xvector_output_dim (int, optional, defaults to 512) — Dimensionality of the XVector embedding vectors. add_adapter (bool, optional, defaults to False) — Whether a convolutional network should be stacked on top of the Data2VecAudio Encoder. Can be very useful for warm-starting Data2VecAudio for SpeechEncoderDecoder models. adapter_kernel_size (int, optional, defaults to 3) — Kernel size of the convolutional layers in the adapter network. Only relevant if add_adapter is True. adapter_stride (int, optional, defaults to 2) — Stride of the convolutional layers in the adapter network. Only relevant if add_adapter is True. num_adapter_layers (int, optional, defaults to 3) — Number of convolutional layers that should be used in the adapter network. Only relevant if add_adapter is True. output_hidden_size (int, optional) — Dimensionality of the encoder output layer. If not defined, this defaults to hidden-size. Only relevant if add_adapter is True. This is the configuration class to store the configuration of a Data2VecAudioModel. It is used to instantiate an Data2VecAudio model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Data2VecAudio facebook/data2vec-audio-base-960h architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. 
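To make the time-masking arguments above more concrete, the back-of-the-envelope calculation below uses made-up numbers (not values from any real training run) to show how many mask spans are drawn and roughly what fraction of the time axis they cover:

>>> # illustrative numbers only
>>> mask_time_prob = 0.05
>>> mask_time_length = 10
>>> num_time_steps = 1000  # length of the time axis after the convolutional feature encoder
>>> # number of independently sampled mask spans along the time axis
>>> num_masked_spans = int(mask_time_prob * num_time_steps / mask_time_length)
>>> num_masked_spans
5
>>> # each span masks mask_time_length consecutive frames, so (ignoring overlap)
>>> # roughly mask_time_prob of all time steps end up masked
>>> num_masked_spans * mask_time_length / num_time_steps
0.05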
Example: >>> from transformers import Data2VecAudioConfig, Data2VecAudioModel >>> >>> configuration = Data2VecAudioConfig() >>> >>> model = Data2VecAudioModel(configuration) >>> >>> configuration = model.config Data2VecVisionConfig class transformers.Data2VecVisionConfig < source > ( hidden_size = 768 num_hidden_layers = 12 num_attention_heads = 12 intermediate_size = 3072 hidden_act = 'gelu' hidden_dropout_prob = 0.0 attention_probs_dropout_prob = 0.0 initializer_range = 0.02 layer_norm_eps = 1e-12 image_size = 224 patch_size = 16 num_channels = 3 use_mask_token = False use_absolute_position_embeddings = False use_relative_position_bias = False use_shared_relative_position_bias = False layer_scale_init_value = 0.1 drop_path_rate = 0.1 use_mean_pooling = True out_indices = [3, 5, 7, 11] pool_scales = [1, 2, 3, 6] use_auxiliary_head = True auxiliary_loss_weight = 0.4 auxiliary_channels = 256 auxiliary_num_convs = 1 auxiliary_concat_input = False semantic_loss_ignore_index = 255 **kwargs ) Parameters hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder. hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported. hidden_dropout_prob (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers. image_size (int, optional, defaults to 224) — The size (resolution) of each image. patch_size (int, optional, defaults to 16) — The size (resolution) of each patch. num_channels (int, optional, defaults to 3) — The number of input channels. use_mask_token (bool, optional, defaults to False) — Whether to use a mask token for masked image modeling. use_absolute_position_embeddings (bool, optional, defaults to False) — Whether to use BERT-style absolute position embeddings. use_relative_position_bias (bool, optional, defaults to False) — Whether to use T5-style relative position embeddings in the self-attention layers. use_shared_relative_position_bias (bool, optional, defaults to False) — Whether to use the same relative position embeddings across all self-attention layers of the Transformer. layer_scale_init_value (float, optional, defaults to 0.1) — Scale to use in the self-attention layers. 0.1 for base, 1e-5 for large. Set 0 to disable layer scale. drop_path_rate (float, optional, defaults to 0.1) — Stochastic depth rate per sample (when applied in the main path of residual layers). use_mean_pooling (bool, optional, defaults to True) — Whether to mean pool the final hidden states of the patches instead of using the final hidden state of the CLS token, before applying the classification head. 
out_indices (List[int], optional, defaults to [3, 5, 7, 11]) — Indices of the feature maps to use for semantic segmentation. pool_scales (Tuple[int], optional, defaults to [1, 2, 3, 6]) — Pooling scales used in Pooling Pyramid Module applied on the last feature map. use_auxiliary_head (bool, optional, defaults to True) — Whether to use an auxiliary head during training. auxiliary_loss_weight (float, optional, defaults to 0.4) — Weight of the cross-entropy loss of the auxiliary head. auxiliary_channels (int, optional, defaults to 256) — Number of channels to use in the auxiliary head. auxiliary_num_convs (int, optional, defaults to 1) — Number of convolutional layers to use in the auxiliary head. auxiliary_concat_input (bool, optional, defaults to False) — Whether to concatenate the output of the auxiliary head with the input before the classification layer. semantic_loss_ignore_index (int, optional, defaults to 255) — The index that is ignored by the loss function of the semantic segmentation model. This is the configuration class to store the configuration of a Data2VecVisionModel. It is used to instantiate an Data2VecVision model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Data2VecVision facebook/data2vec-vision-base architecture. Example: >>> from transformers import Data2VecVisionConfig, Data2VecVisionModel >>> >>> configuration = Data2VecVisionConfig() >>> >>> model = Data2VecVisionModel(configuration) >>> >>> configuration = model.config Data2VecAudioModel class transformers.Data2VecAudioModel < source > ( config: Data2VecAudioConfig ) Parameters config (Data2VecAudioConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare Data2VecAudio Model transformer outputting raw hidden-states without any specific head on top. Data2VecAudio was proposed in data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and Michael Auli. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving etc.). This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_values: typing.Optional[torch.Tensor] attention_mask: typing.Optional[torch.Tensor] = None mask_time_indices: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.Wav2Vec2BaseModelOutput or tuple(torch.FloatTensor) Parameters input_values (torch.FloatTensor of shape (batch_size, sequence_length)) — Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.call() for details. 
attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as data2vec-audio-base, attention_mask should not be passed to avoid degraded performance when doing batched inference. For such models input_values should simply be padded with 0 and passed without attention_mask. Be aware that these models also yield slightly different results depending on whether input_values is padded or not. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.Wav2Vec2BaseModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Data2VecAudioConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. extract_features (torch.FloatTensor of shape (batch_size, sequence_length, conv_dim[-1])) — Sequence of extracted feature vectors of the last convolutional layer of the model. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The Data2VecAudioModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoProcessor, Data2VecAudioModel >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation") >>> dataset = dataset.sort("id") >>> sampling_rate = dataset.features["audio"].sampling_rate >>> processor = AutoProcessor.from_pretrained("facebook/data2vec-audio-base-960h") >>> model = Data2VecAudioModel.from_pretrained("facebook/data2vec-audio-base-960h") >>> >>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt") >>> with torch.no_grad(): ... 
outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state >>> list(last_hidden_states.shape) [1, 292, 768] Data2VecAudioForAudioFrameClassification class transformers.Data2VecAudioForAudioFrameClassification < source > ( config ) Parameters config (Data2VecAudioConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Data2VecAudio Model with a frame classification head on top for tasks like Speaker Diarization. Data2VecAudio was proposed in data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and Michael Auli. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving etc.). This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_values: typing.Optional[torch.Tensor] attention_mask: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor) Parameters input_values (torch.FloatTensor of shape (batch_size, sequence_length)) — Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.call() for details. attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as data2vec-audio-base, attention_mask should not be passed to avoid degraded performance when doing batched inference. For such models input_values should simply be padded with 0 and passed without attention_mask. Be aware that these models also yield slightly different results depending on whether input_values is padded or not. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. 
If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_outputs.TokenClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Data2VecAudioConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The Data2VecAudioForAudioFrameClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoFeatureExtractor, Data2VecAudioForAudioFrameClassification >>> from datasets import load_dataset >>> import torch >>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation") >>> dataset = dataset.sort("id") >>> sampling_rate = dataset.features["audio"].sampling_rate >>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/data2vec-audio-base-960h") >>> model = Data2VecAudioForAudioFrameClassification.from_pretrained("facebook/data2vec-audio-base-960h") >>> >>> inputs = feature_extractor(dataset[0]["audio"]["array"], return_tensors="pt", sampling_rate=sampling_rate) >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> probabilities = torch.sigmoid(logits[0]) >>> >>> labels = (probabilities > 0.5).long() Data2VecAudioForCTC class transformers.Data2VecAudioForCTC < source > ( config ) Parameters config (Data2VecAudioConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Data2VecAudio Model with a language modeling head on top for Connectionist Temporal Classification (CTC). Data2VecAudio was proposed in data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and Michael Auli. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving etc.). This model is a PyTorch torch.nn.Module sub-class. 
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_values: typing.Optional[torch.Tensor] attention_mask: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None labels: typing.Optional[torch.Tensor] = None ) → transformers.modeling_outputs.CausalLMOutput or tuple(torch.FloatTensor) Parameters input_values (torch.FloatTensor of shape (batch_size, sequence_length)) — Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.call() for details. attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as data2vec-audio-base, attention_mask should not be passed to avoid degraded performance when doing batched inference. For such models input_values should simply be padded with 0 and passed without attention_mask. Be aware that these models also yield slightly different results depending on whether input_values is padded or not. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, target_length), optional) — Labels for connectionist temporal classification. Note that target_length has to be smaller or equal to the sequence length of the output logits. Indices are selected in [-100, 0, ..., config.vocab_size - 1]. All labels set to -100 are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size - 1]. A transformers.modeling_outputs.CausalLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Data2VecAudioConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction). logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). 
Hidden-states of the model at the output of each layer plus the optional initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The Data2VecAudioForCTC forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoProcessor, Data2VecAudioForCTC
>>> from datasets import load_dataset
>>> import torch

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> processor = AutoProcessor.from_pretrained("facebook/data2vec-audio-base-960h")
>>> model = Data2VecAudioForCTC.from_pretrained("facebook/data2vec-audio-base-960h")

>>> # audio file is decoded on the fly
>>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_ids = torch.argmax(logits, dim=-1)

>>> # transcribe speech
>>> transcription = processor.batch_decode(predicted_ids)
>>> transcription[0]
'MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL'

>>> inputs["labels"] = processor(text=dataset[0]["text"], return_tensors="pt").input_ids

>>> # compute loss
>>> loss = model(**inputs).loss
>>> round(loss.item(), 2)
66.95
Data2VecAudioForSequenceClassification
class transformers.Data2VecAudioForSequenceClassification < source > ( config )
Parameters
config (Data2VecAudioConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Data2VecAudio Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like SUPERB Keyword Spotting. Data2VecAudio was proposed in data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and Michael Auli.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, etc.).
This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward < source > ( input_values: typing.Optional[torch.Tensor] attention_mask: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None labels: typing.Optional[torch.Tensor] = None ) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_values (torch.FloatTensor of shape (batch_size, sequence_length)) — Float values of the input raw speech waveform.
Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.call() for details. attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as data2vec-audio-base, attention_mask should not be passed to avoid degraded performance when doing batched inference. For such models input_values should simply be padded with 0 and passed without attention_mask. Be aware that these models also yield slightly different results depending on whether input_values is padded or not. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Data2VecAudioConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The Data2VecAudioForSequenceClassification forward method, overrides the __call__ special method. 
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoFeatureExtractor, Data2VecAudioForSequenceClassification >>> from datasets import load_dataset >>> import torch >>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation") >>> dataset = dataset.sort("id") >>> sampling_rate = dataset.features["audio"].sampling_rate >>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/data2vec-audio-base-960h") >>> model = Data2VecAudioForSequenceClassification.from_pretrained("facebook/data2vec-audio-base-960h") >>> >>> inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_ids = torch.argmax(logits, dim=-1).item() >>> predicted_label = model.config.id2label[predicted_class_ids] >>> >>> target_label = model.config.id2label[0] >>> inputs["labels"] = torch.tensor([model.config.label2id[target_label]]) >>> loss = model(**inputs).loss Data2VecAudioForXVector class transformers.Data2VecAudioForXVector < source > ( config ) Parameters config (Data2VecAudioConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Data2VecAudio Model with an XVector feature extraction head on top for tasks like Speaker Verification. Data2VecAudio was proposed in data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and Michael Auli. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving etc.). This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_values: typing.Optional[torch.Tensor] attention_mask: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None labels: typing.Optional[torch.Tensor] = None ) → transformers.modeling_outputs.XVectorOutput or tuple(torch.FloatTensor) Parameters input_values (torch.FloatTensor of shape (batch_size, sequence_length)) — Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.call() for details. attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? 
attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as data2vec-audio-base, attention_mask should not be passed to avoid degraded performance when doing batched inference. For such models input_values should simply be padded with 0 and passed without attention_mask. Be aware that these models also yield slightly different results depending on whether input_values is padded or not. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_outputs.XVectorOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Data2VecAudioConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, config.xvector_output_dim)) — Classification hidden states before AMSoftmax. embeddings (torch.FloatTensor of shape (batch_size, config.xvector_output_dim)) — Utterance embeddings used for vector similarity-based retrieval. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The Data2VecAudioForXVector forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoFeatureExtractor, Data2VecAudioForXVector >>> from datasets import load_dataset >>> import torch >>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation") >>> dataset = dataset.sort("id") >>> sampling_rate = dataset.features["audio"].sampling_rate >>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/data2vec-audio-base-960h") >>> model = Data2VecAudioForXVector.from_pretrained("facebook/data2vec-audio-base-960h") >>> >>> inputs = feature_extractor( ... 
[d["array"] for d in dataset[:2]["audio"]], sampling_rate=sampling_rate, return_tensors="pt", padding=True ... ) >>> with torch.no_grad(): ... embeddings = model(**inputs).embeddings >>> embeddings = torch.nn.functional.normalize(embeddings, dim=-1).cpu() >>> >>> cosine_sim = torch.nn.CosineSimilarity(dim=-1) >>> similarity = cosine_sim(embeddings[0], embeddings[1]) >>> threshold = 0.7 >>> if similarity < threshold: ... print("Speakers are not the same!") Data2VecTextModel class transformers.Data2VecTextModel < source > ( config add_pooling_layer = True ) Parameters config (Data2VecTextConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare Data2VecText Model for text transformer outputting raw hidden-states without any specific head on top. Data2VecText was proposed in data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and Michael Auli. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in Attention is all you need_ by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. To behave as an decoder the model needs to be initialized with the is_decoder argument of the configuration set to True. To be used in a Seq2Seq model, the model needs to initialized with both is_decoder argument and add_cross_attention set to True; an encoder_hidden_states is then expected as an input to the forward pass. .. _Attention is all you need: https://arxiv.org/abs/1706.03762 forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None encoder_hidden_states: typing.Optional[torch.Tensor] = None encoder_attention_mask: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.List[torch.FloatTensor]] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. 
Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). A transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Data2VecTextConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. 
pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. The Data2VecTextModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, Data2VecTextModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base") >>> model = Data2VecTextModel.from_pretrained("facebook/data2vec-text-base") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state Data2VecTextForCausalLM class transformers.Data2VecTextForCausalLM < source > ( config ) Parameters config (Data2VecTextConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. 
Data2VecText Model with a language modeling head on top for CLM fine-tuning. Data2VecText was proposed in data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and Michael Auli. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None encoder_hidden_states: typing.Optional[torch.FloatTensor] = None encoder_attention_mask: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. 
See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder. encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size] past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Data2VecTextConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction). logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). 
Cross attentions weights after the attention softmax, used to compute the weighted average in the cross-attention heads. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key, value states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting. Only relevant if config.is_decoder = True. Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. The Data2VecTextForCausalLM forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, Data2VecTextForCausalLM, Data2VecTextConfig >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base") >>> config = Data2VecTextConfig.from_pretrained("facebook/data2vec-text-base") >>> config.is_decoder = True >>> model = Data2VecTextForCausalLM.from_pretrained("facebook/data2vec-text-base", config=config) >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> prediction_logits = outputs.logits Data2VecTextForMaskedLM class transformers.Data2VecTextForMaskedLM < source > ( config ) Parameters config (Data2VecTextConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. data2vec Model with a language modeling head on top. Data2VecText was proposed in data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and Michael Auli. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None encoder_hidden_states: typing.Optional[torch.FloatTensor] = None encoder_attention_mask: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. 
See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size] kwargs (Dict[str, any], optional, defaults to {}) — Used to hide legacy arguments that have been deprecated. A transformers.modeling_outputs.MaskedLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Data2VecTextConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. 
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
The Data2VecTextForMaskedLM forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example:
>>> from transformers import AutoTokenizer, Data2VecTextForMaskedLM
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base")
>>> model = Data2VecTextForMaskedLM.from_pretrained("facebook/data2vec-text-base")

>>> inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> # retrieve index of <mask>
>>> mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
>>> predicted_token_id = logits[0, mask_token_index].argmax(axis=-1)

>>> labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"]
>>> # mask labels of non-<mask> tokens
>>> labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100)

>>> outputs = model(**inputs, labels=labels)
Data2VecTextForSequenceClassification
class transformers.Data2VecTextForSequenceClassification < source > ( config )
Parameters
config (Data2VecTextConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Data2VecText Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output), e.g. for GLUE tasks. Data2VecText was proposed in data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and Michael Auli.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor)
Parameters
input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.__call__() for details.
What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Data2VecTextConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. 
The Data2VecTextForSequenceClassification forward method overrides the __call__ special method.
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this one, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.
Example of single-label classification:
>>> import torch
>>> from transformers import AutoTokenizer, Data2VecTextForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base")
>>> model = Data2VecTextForSequenceClassification.from_pretrained("facebook/data2vec-text-base")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_class_id = logits.argmax().item()

>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = Data2VecTextForSequenceClassification.from_pretrained("facebook/data2vec-text-base", num_labels=num_labels)

>>> labels = torch.tensor([1])
>>> loss = model(**inputs, labels=labels).loss
Example of multi-label classification:
>>> import torch
>>> from transformers import AutoTokenizer, Data2VecTextForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base")
>>> model = Data2VecTextForSequenceClassification.from_pretrained("facebook/data2vec-text-base", problem_type="multi_label_classification")

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5]

>>> # To train a model on `num_labels` classes, you can pass `num_labels=num_labels` to `.from_pretrained(...)`
>>> num_labels = len(model.config.id2label)
>>> model = Data2VecTextForSequenceClassification.from_pretrained(
...     "facebook/data2vec-text-base", num_labels=num_labels, problem_type="multi_label_classification"
... )

>>> labels = torch.sum(
...     torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1
... ).to(torch.float)
>>> loss = model(**inputs, labels=labels).loss
Data2VecTextForMultipleChoice
class transformers.Data2VecTextForMultipleChoice < source > ( config )
Parameters
config (Data2VecTextConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
Data2VecText Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax), e.g. for RocStories/SWAG tasks. Data2VecText was proposed in data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and Michael Auli.
This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).
This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
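Before the forward documentation below, a minimal sketch of the expected input layout may be helpful: every input tensor carries an extra num_choices dimension, i.e. shape (batch_size, num_choices, sequence_length), and the returned logits have shape (batch_size, num_choices). This is a hedged illustration only; it assumes the facebook/data2vec-text-base checkpoint, whose multiple-choice head is randomly initialized until fine-tuned, and the prompt/choice strings are made up for the example. The runnable end-to-end example after the forward documentation shows the same pattern with labels.
>>> from transformers import AutoTokenizer, Data2VecTextForMultipleChoice
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base")
>>> model = Data2VecTextForMultipleChoice.from_pretrained("facebook/data2vec-text-base")

>>> prompt = "The glass fell off the table."  # hypothetical example text
>>> choices = ["It shattered on the floor.", "It started to sing."]

>>> # tokenize each (prompt, choice) pair, then add the leading batch dimension:
>>> # (num_choices, seq_len) -> (batch_size=1, num_choices, seq_len)
>>> encoding = tokenizer([prompt] * len(choices), choices, return_tensors="pt", padding=True)
>>> inputs = {k: v.unsqueeze(0) for k, v in encoding.items()}

>>> with torch.no_grad():
...     logits = model(**inputs).logits  # shape (batch_size, num_choices)
>>> probs = torch.softmax(logits, dim=-1)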
forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See input_ids above) A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Data2VecTextConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. 
(see input_ids above). Classification scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The Data2VecTextForMultipleChoice forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, Data2VecTextForMultipleChoice >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base") >>> model = Data2VecTextForMultipleChoice.from_pretrained("facebook/data2vec-text-base") >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." >>> choice0 = "It is eaten with a fork and a knife." >>> choice1 = "It is eaten while held in the hand." >>> labels = torch.tensor(0).unsqueeze(0) >>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True) >>> outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) >>> >>> loss = outputs.loss >>> logits = outputs.logits Data2VecTextForTokenClassification class transformers.Data2VecTextForTokenClassification < source > ( config ) Parameters config (Data2VecTextConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Data2VecText Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. Data2VecText was proposed in data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and Michael Auli. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1]. A transformers.modeling_outputs.TokenClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Data2VecTextConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). 
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The Data2VecTextForTokenClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, Data2VecTextForTokenClassification >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base") >>> model = Data2VecTextForTokenClassification.from_pretrained("facebook/data2vec-text-base") >>> inputs = tokenizer( ... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt" ... ) >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_token_class_ids = logits.argmax(-1) >>> >>> >>> >>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]] >>> labels = predicted_token_class_ids >>> loss = model(**inputs, labels=labels).loss Data2VecTextForQuestionAnswering class transformers.Data2VecTextForQuestionAnswering < source > ( config ) Parameters config (Data2VecTextConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Data2VecText Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute span start logits and span end logits). Data2VecText was proposed in data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language by Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu and Michael Auli. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None start_positions: typing.Optional[torch.LongTensor] = None end_positions: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. start_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. end_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. 
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Data2VecTextConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The Data2VecTextForQuestionAnswering forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, Data2VecTextForQuestionAnswering >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("facebook/data2vec-text-base") >>> model = Data2VecTextForQuestionAnswering.from_pretrained("facebook/data2vec-text-base") >>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" >>> inputs = tokenizer(question, text, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> answer_start_index = outputs.start_logits.argmax() >>> answer_end_index = outputs.end_logits.argmax() >>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] >>> >>> target_start_index = torch.tensor([14]) >>> target_end_index = torch.tensor([15]) >>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index) >>> loss = outputs.loss Data2VecVisionModel class transformers.Data2VecVisionModel < source > ( config: Data2VecVisionConfig add_pooling_layer: bool = False ) Parameters config (Data2VecVisionConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare Data2VecVision Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
forward < source > ( pixel_values: typing.Optional[torch.Tensor] = None bool_masked_pos: typing.Optional[torch.BoolTensor] = None head_mask: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.data2vec.modeling_data2vec_vision.Data2VecVisionModelOutputWithPooling or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See BeitImageProcessor.call() for details. head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. bool_masked_pos (torch.BoolTensor of shape (batch_size, num_patches), optional) — Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0). Returns transformers.models.data2vec.modeling_data2vec_vision.Data2VecVisionModelOutputWithPooling or tuple(torch.FloatTensor) A transformers.models.data2vec.modeling_data2vec_vision.Data2VecVisionModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Data2VecVisionConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Average of the last layer hidden states of the patch tokens (excluding the [CLS] token) if config.use_mean_pooling is set to True. If set to False, then the final hidden state of the [CLS] token will be returned. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The Data2VecVisionModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: >>> from transformers import AutoImageProcessor, Data2VecVisionModel >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> image_processor = AutoImageProcessor.from_pretrained("facebook/data2vec-vision-base") >>> model = Data2VecVisionModel.from_pretrained("facebook/data2vec-vision-base") >>> inputs = image_processor(image, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state >>> list(last_hidden_states.shape) [1, 197, 768] Data2VecVisionForImageClassification class transformers.Data2VecVisionForImageClassification < source > ( config: Data2VecVisionConfig ) Parameters config (Data2VecVisionConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Data2VecVision Model transformer with an image classification head on top (a linear layer on top of the average of the final hidden states of the patch tokens) e.g. for ImageNet. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.ImageClassifierOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See BeitImageProcessor.call() for details. head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_outputs.ImageClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Data2VecVisionConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). 
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states (also called feature maps) of the model at the output of each stage. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The Data2VecVisionForImageClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoImageProcessor, Data2VecVisionForImageClassification >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> image_processor = AutoImageProcessor.from_pretrained("facebook/data2vec-vision-base-ft1k") >>> model = Data2VecVisionForImageClassification.from_pretrained("facebook/data2vec-vision-base-ft1k") >>> inputs = image_processor(image, return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> >>> predicted_label = logits.argmax(-1).item() >>> print(model.config.id2label[predicted_label]) remote control, remote Data2VecVisionForSemanticSegmentation class transformers.Data2VecVisionForSemanticSegmentation < source > ( config: Data2VecVisionConfig ) Parameters config (Data2VecVisionConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Data2VecVision Model transformer with a semantic segmentation head on top e.g. for ADE20k, CityScapes. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.SemanticSegmenterOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See BeitImageProcessor.call() for details. head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. 
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, height, width), optional) — Ground truth semantic segmentation maps for computing the loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels > 1, a classification loss is computed (Cross-Entropy). A transformers.modeling_outputs.SemanticSegmenterOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Data2VecVisionConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels, logits_height, logits_width)) — Classification scores for each pixel. The logits returned do not necessarily have the same size as the pixel_values passed as inputs. This is to avoid doing two interpolations and lose some quality when a user needs to resize the logits to the original image size as post-processing. You should always check your logits shape and resize as needed. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, patch_size, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The Data2VecVisionForSemanticSegmentation forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import AutoImageProcessor, Data2VecVisionForSemanticSegmentation >>> from PIL import Image >>> import requests >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image_processor = AutoImageProcessor.from_pretrained("facebook/data2vec-vision-base") >>> model = Data2VecVisionForSemanticSegmentation.from_pretrained("facebook/data2vec-vision-base") >>> inputs = image_processor(images=image, return_tensors="pt") >>> outputs = model(**inputs) >>> >>> logits = outputs.logits TFData2VecVisionModel class transformers.TFData2VecVisionModel < source > ( *args **kwargs ) Parameters config (Data2VecVisionConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. 
The bare Data2VecVision Model transformer outputting raw hidden-states without any specific head on top. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.). This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with pixel_values only and nothing else: model(pixel_values) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([pixel_values, attention_mask]) or model([pixel_values, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"pixel_values": pixel_values, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( pixel_values: TFModelInputType | None = None bool_masked_pos: tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: bool = False ) → transformers.models.data2vec.modeling_tf_data2vec_vision.TFData2VecVisionModelOutputWithPooling or tuple(tf.Tensor) Parameters pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor] Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See BeitImageProcessor.call() for details. head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. 
training (bool, optional, defaults to `False`) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). bool_masked_pos (tf.Tensor of shape (batch_size, num_patches), optional) — Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0). Returns transformers.models.data2vec.modeling_tf_data2vec_vision.TFData2VecVisionModelOutputWithPooling or tuple(tf.Tensor) A transformers.models.data2vec.modeling_tf_data2vec_vision.TFData2VecVisionModelOutputWithPooling or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Data2VecVisionConfig) and inputs. last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Average of the last layer hidden states of the patch tokens (excluding the [CLS] token) if config.use_mean_pooling is set to True. If set to False, then the final hidden state of the [CLS] token will be returned. hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFData2VecVisionModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoImageProcessor, TFData2VecVisionModel >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> image_processor = AutoImageProcessor.from_pretrained("facebook/data2vec-vision-base") >>> model = TFData2VecVisionModel.from_pretrained("facebook/data2vec-vision-base") >>> inputs = image_processor(image, return_tensors="tf") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state >>> list(last_hidden_states.shape) [1, 197, 768] TFData2VecVisionForImageClassification class transformers.TFData2VecVisionForImageClassification < source > ( *args **kwargs ) Parameters config (Data2VecVisionConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Data2VecVision Model transformer with an image classification head on top (a linear layer on top of the average of the final hidden states of the patch tokens) e.g. for ImageNet. This model inherits from TFPreTrainedModel.
Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.). This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with pixel_values only and nothing else: model(pixel_values) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([pixel_values, attention_mask]) or model([pixel_values, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"pixel_values": pixel_values, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( pixel_values: TFModelInputType | None = None head_mask: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor) Parameters pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor] Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See BeitImageProcessor.call() for details. head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to `False`) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
labels (tf.Tensor or np.ndarray of shape (batch_size,), optional) — Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Data2VecVisionConfig) and inputs. loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFData2VecVisionForImageClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoImageProcessor, TFData2VecVisionForImageClassification >>> import tensorflow as tf >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> image_processor = AutoImageProcessor.from_pretrained("facebook/data2vec-vision-base-ft1k") >>> model = TFData2VecVisionForImageClassification.from_pretrained("facebook/data2vec-vision-base-ft1k") >>> inputs = image_processor(image, return_tensors="tf") >>> logits = model(**inputs).logits >>> >>> predicted_label = int(tf.math.argmax(logits, axis=-1)) >>> print(model.config.id2label[predicted_label]) remote control, remote TFData2VecVisionForSemanticSegmentation class transformers.TFData2VecVisionForSemanticSegmentation < source > ( *args **kwargs ) Parameters config (Data2VecVisionConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Data2VecVision Model transformer with a semantic segmentation head on top e.g. for ADE20k, CityScapes. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.). This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. 
TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with pixel_values only and nothing else: model(pixel_values) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([pixel_values, attention_mask]) or model([pixel_values, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"pixel_values": pixel_values, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( pixel_values: tf.Tensor | None = None head_mask: tf.Tensor | None = None labels: tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None ) → transformers.modeling_tf_outputs.TFSemanticSegmenterOutput or tuple(tf.Tensor) Parameters pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor] Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See BeitImageProcessor.call() for details. head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to `False`) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). labels (tf.Tensor of shape (batch_size, height, width), optional) — Ground truth semantic segmentation maps for computing the loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels > 1, a classification loss is computed (Cross-Entropy).
Returns transformers.modeling_tf_outputs.TFSemanticSegmenterOutput or tuple(tf.Tensor) A transformers.modeling_tf_outputs.TFSemanticSegmenterOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Data2VecVisionConfig) and inputs. loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (tf.Tensor of shape (batch_size, config.num_labels, logits_height, logits_width)) — Classification scores for each pixel. The logits returned do not necessarily have the same size as the pixel_values passed as inputs. This is to avoid doing two interpolations and lose some quality when a user needs to resize the logits to the original image size as post-processing. You should always check your logits shape and resize as needed. hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, patch_size, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFData2VecVisionForSemanticSegmentation forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import AutoImageProcessor, TFData2VecVisionForSemanticSegmentation >>> from PIL import Image >>> import requests >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image_processor = AutoImageProcessor.from_pretrained("facebook/data2vec-vision-base") >>> model = TFData2VecVisionForSemanticSegmentation.from_pretrained("facebook/data2vec-vision-base") >>> inputs = image_processor(images=image, return_tensors="tf") >>> outputs = model(**inputs) >>> # logits are of shape (batch_size, num_labels, logits_height, logits_width) >>> logits = outputs.logits
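As noted in the return description above, the segmentation logits come out at a reduced resolution. The follow-up below is only an illustrative sketch (not part of the official example) of one way to upsample them to the input image size with bilinear resizing before taking a per-pixel argmax; it assumes the image and outputs objects created in the example above:

>>> import tensorflow as tf

>>> # (batch, num_labels, h, w) -> (batch, h, w, num_labels) so tf.image.resize can be applied
>>> channels_last = tf.transpose(outputs.logits, perm=[0, 2, 3, 1])
>>> # PIL's image.size is (width, height); tf.image.resize expects (height, width)
>>> upsampled_logits = tf.image.resize(channels_last, size=image.size[::-1])
>>> # per-pixel class map of shape (batch, height, width)
>>> predicted_segmentation = tf.math.argmax(upsampled_logits, axis=-1)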
https://huggingface.co/docs/transformers/model_doc/deberta
DeBERTa Overview The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen It is based on Google’s BERT model released in 2018 and Facebook’s RoBERTa model released in 2019. It builds on RoBERTa with disentangled attention and enhanced mask decoder training with half of the data used in RoBERTa. The abstract from the paper is the following: Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks. In this paper we propose a new model architecture DeBERTa (Decoding-enhanced BERT with disentangled attention) that improves the BERT and RoBERTa models using two novel techniques. The first is the disentangled attention mechanism, where each word is represented using two vectors that encode its content and position, respectively, and the attention weights among words are computed using disentangled matrices on their contents and relative positions. Second, an enhanced mask decoder is used to replace the output softmax layer to predict the masked tokens for model pretraining. We show that these two techniques significantly improve the efficiency of model pretraining and performance of downstream tasks. Compared to RoBERTa-Large, a DeBERTa model trained on half of the training data performs consistently better on a wide range of NLP tasks, achieving improvements on MNLI by +0.9% (90.2% vs. 91.1%), on SQuAD v2.0 by +2.3% (88.4% vs. 90.7%) and RACE by +3.6% (83.2% vs. 86.8%). The DeBERTa code and pre-trained models will be made publicly available at https://github.com/microsoft/DeBERTa. This model was contributed by DeBERTa. This model TF 2.0 implementation was contributed by kamalkraj . The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DeBERTa. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. Text Classification A blog post on how to Accelerate Large Model Training using DeepSpeed with DeBERTa. A blog post on Supercharged Customer Service with Machine Learning with DeBERTa. DebertaForSequenceClassification is supported by this example script and notebook. TFDebertaForSequenceClassification is supported by this example script and notebook. Text classification task guide Token Classification DebertaForTokenClassification is supported by this example script and notebook. TFDebertaForTokenClassification is supported by this example script and notebook. Token classification chapter of the 🤗 Hugging Face Course. Byte-Pair Encoding tokenization chapter of the 🤗 Hugging Face Course. Token classification task guide Fill-Mask DebertaForMaskedLM is supported by this example script and notebook. TFDebertaForMaskedLM is supported by this example script and notebook. Masked language modeling chapter of the 🤗 Hugging Face Course. Masked language modeling task guide Question Answering DebertaForQuestionAnswering is supported by this example script and notebook. TFDebertaForQuestionAnswering is supported by this example script and notebook. Question answering chapter of the 🤗 Hugging Face Course. 
Question answering task guide DebertaConfig class transformers.DebertaConfig < source > ( vocab_size = 50265 hidden_size = 768 num_hidden_layers = 12 num_attention_heads = 12 intermediate_size = 3072 hidden_act = 'gelu' hidden_dropout_prob = 0.1 attention_probs_dropout_prob = 0.1 max_position_embeddings = 512 type_vocab_size = 0 initializer_range = 0.02 layer_norm_eps = 1e-07 relative_attention = False max_relative_positions = -1 pad_token_id = 0 position_biased_input = True pos_att_type = None pooler_dropout = 0 pooler_hidden_act = 'gelu' **kwargs ) Parameters vocab_size (int, optional, defaults to 50265) — Vocabulary size of the DeBERTa model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling DebertaModel or TFDebertaModel. hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder. hidden_act (str or Callable, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu", "tanh", "gelu_fast", "mish", "linear", "sigmoid" and "gelu_new" are supported. hidden_dropout_prob (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (float, optional, defaults to 0.1) — The dropout ratio for the attention probabilities. max_position_embeddings (int, optional, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). type_vocab_size (int, optional, defaults to 0) — The vocabulary size of the token_type_ids passed when calling DebertaModel or TFDebertaModel. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (float, optional, defaults to 1e-07) — The epsilon used by the layer normalization layers. relative_attention (bool, optional, defaults to False) — Whether to use relative position encoding. max_relative_positions (int, optional, defaults to -1) — The range of relative positions [-max_position_embeddings, max_position_embeddings]. Use the same value as max_position_embeddings. pad_token_id (int, optional, defaults to 0) — The value used to pad input_ids. position_biased_input (bool, optional, defaults to True) — Whether to add absolute position embeddings to the content embeddings. pos_att_type (List[str], optional) — The type of relative position attention, it can be a combination of ["p2c", "c2p"], e.g. ["p2c"], ["p2c", "c2p"]. This is the configuration class to store the configuration of a DebertaModel or a TFDebertaModel. It is used to instantiate a DeBERTa model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the DeBERTa microsoft/deberta-base architecture. 
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import DebertaConfig, DebertaModel >>> # Initializing a DeBERTa microsoft/deberta-base style configuration >>> configuration = DebertaConfig() >>> # Initializing a model (with random weights) from the microsoft/deberta-base style configuration >>> model = DebertaModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config DebertaTokenizer class transformers.DebertaTokenizer < source > ( vocab_file merges_file errors = 'replace' bos_token = '[CLS]' eos_token = '[SEP]' sep_token = '[SEP]' cls_token = '[CLS]' unk_token = '[UNK]' pad_token = '[PAD]' mask_token = '[MASK]' add_prefix_space = False add_bos_token = False **kwargs ) Parameters vocab_file (str) — Path to the vocabulary file. merges_file (str) — Path to the merges file. errors (str, optional, defaults to "replace") — Paradigm to follow when decoding bytes to UTF-8. See bytes.decode for more information. bos_token (str, optional, defaults to "[CLS]") — The beginning of sequence token. eos_token (str, optional, defaults to "[SEP]") — The end of sequence token. sep_token (str, optional, defaults to "[SEP]") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. cls_token (str, optional, defaults to "[CLS]") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. unk_token (str, optional, defaults to "[UNK]") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. pad_token (str, optional, defaults to "[PAD]") — The token used for padding, for example when batching sequences of different lengths. mask_token (str, optional, defaults to "[MASK]") — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. add_prefix_space (bool, optional, defaults to False) — Whether or not to add an initial space to the input. This allows the leading word to be treated just as any other word (the DeBERTa tokenizer detects the beginning of words by the preceding space). add_bos_token (bool, optional, defaults to False) — Whether or not to add an initial beginning-of-sequence token to the input. This allows the leading word to be treated just as any other word. Construct a DeBERTa tokenizer. Based on byte-level Byte-Pair-Encoding. This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece), so a word will be encoded differently depending on whether it is at the beginning of the sentence (without space) or not: >>> from transformers import DebertaTokenizer >>> tokenizer = DebertaTokenizer.from_pretrained("microsoft/deberta-base") >>> tokenizer("Hello world")["input_ids"] [1, 31414, 232, 2] >>> tokenizer(" Hello world")["input_ids"] [1, 20920, 232, 2] You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer or when you call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance. When used with is_split_into_words=True, this tokenizer will add a space before each word (even the first one). This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. 
Users should refer to this superclass for more information regarding those methods. build_inputs_with_special_tokens < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs to which the special tokens will be added. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of input IDs with the appropriate special tokens. Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A DeBERTa sequence has the following format: single sequence: [CLS] X [SEP] pair of sequences: [CLS] A [SEP] B [SEP] get_special_tokens_mask < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None already_has_special_tokens: bool = False ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. already_has_special_tokens (bool, optional, defaults to False) — Whether or not the token list is already formatted with special tokens for the model. A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model or encode_plus methods. create_token_type_ids_from_sequences < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of token type IDs according to the given sequence(s). Create a mask from the two sequences passed to be used in a sequence-pair classification task. A DeBERTa sequence pair mask has the following format: 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 | first sequence | second sequence | If token_ids_1 is None, this method only returns the first portion of the mask (0s). save_vocabulary < source > ( save_directory: str filename_prefix: typing.Optional[str] = None ) DebertaTokenizerFast class transformers.DebertaTokenizerFast < source > ( vocab_file = None merges_file = None tokenizer_file = None errors = 'replace' bos_token = '[CLS]' eos_token = '[SEP]' sep_token = '[SEP]' cls_token = '[CLS]' unk_token = '[UNK]' pad_token = '[PAD]' mask_token = '[MASK]' add_prefix_space = False **kwargs ) Parameters vocab_file (str) — Path to the vocabulary file. merges_file (str) — Path to the merges file. tokenizer_file (str, optional) — The path to a tokenizer file to use instead of the vocab file. errors (str, optional, defaults to "replace") — Paradigm to follow when decoding bytes to UTF-8. See bytes.decode for more information. bos_token (str, optional, defaults to "[CLS]") — The beginning of sequence token. eos_token (str, optional, defaults to "[SEP]") — The end of sequence token. sep_token (str, optional, defaults to "[SEP]") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. cls_token (str, optional, defaults to "[CLS]") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). 
It is the first token of the sequence when built with special tokens. unk_token (str, optional, defaults to "[UNK]") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. pad_token (str, optional, defaults to "[PAD]") — The token used for padding, for example when batching sequences of different lengths. mask_token (str, optional, defaults to "[MASK]") — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. add_prefix_space (bool, optional, defaults to False) — Whether or not to add an initial space to the input. This allows to treat the leading word just as any other word. (Deberta tokenizer detect beginning of words by the preceding space). Construct a “fast” DeBERTa tokenizer (backed by HuggingFace’s tokenizers library). Based on byte-level Byte-Pair-Encoding. This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will be encoded differently whether it is at the beginning of the sentence (without space) or not: >>> from transformers import DebertaTokenizerFast >>> tokenizer = DebertaTokenizerFast.from_pretrained("microsoft/deberta-base") >>> tokenizer("Hello world")["input_ids"] [1, 31414, 232, 2] >>> tokenizer(" Hello world")["input_ids"] [1, 20920, 232, 2] You can get around that behavior by passing add_prefix_space=True when instantiating this tokenizer, but since the model was not pretrained this way, it might yield a decrease in performance. When used with is_split_into_words=True, this tokenizer needs to be instantiated with add_prefix_space=True. This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. build_inputs_with_special_tokens < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs to which the special tokens will be added. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of input IDs with the appropriate special tokens. Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A DeBERTa sequence has the following format: single sequence: [CLS] X [SEP] pair of sequences: [CLS] A [SEP] B [SEP] create_token_type_ids_from_sequences < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of token type IDs according to the given sequence(s). Create a mask from the two sequences passed to be used in a sequence-pair classification task. A DeBERTa sequence pair mask has the following format: 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 | first sequence | second sequence | If token_ids_1 is None, this method only returns the first portion of the mask (0s). DebertaModel class transformers.DebertaModel < source > ( config ) Parameters config (DebertaConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. 
The bare DeBERTa Model transformer outputting raw hidden-states without any specific head on top. The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It’s built on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and an enhanced mask decoder. With those two improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data. This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DebertaConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. 
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The DebertaModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, DebertaModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base") >>> model = DebertaModel.from_pretrained("microsoft/deberta-base") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state DebertaPreTrainedModel class transformers.DebertaPreTrainedModel < source > ( config: PretrainedConfig *inputs **kwargs ) An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models. DebertaForMaskedLM class transformers.DebertaForMaskedLM < source > ( config ) Parameters config (DebertaConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DeBERTa Model with a language modeling head on top. The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It’s build on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and enhanced mask decoder. With those two improvements, it out perform BERT/RoBERTa on a majority of tasks with 80GB pretraining data. This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? 
attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size] A transformers.modeling_outputs.MaskedLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DebertaConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The DebertaForMaskedLM forward method, overrides the __call__ special method. 
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, DebertaForMaskedLM >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("lsanochkin/deberta-large-feedback") >>> model = DebertaForMaskedLM.from_pretrained("lsanochkin/deberta-large-feedback") >>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> # retrieve index of [MASK] >>> mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0] >>> predicted_token_id = logits[0, mask_token_index].argmax(axis=-1) >>> tokenizer.decode(predicted_token_id) ' Paris' >>> labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"] >>> # mask labels of non-[MASK] tokens >>> labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100) >>> outputs = model(**inputs, labels=labels) >>> round(outputs.loss.item(), 2) 0.54 DebertaForSequenceClassification class transformers.DebertaForSequenceClassification < source > ( config ) Parameters config (DebertaConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DeBERTa Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It’s built on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and an enhanced mask decoder. With those two improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data. This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. 
Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DebertaConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The DebertaForSequenceClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example of single-label classification: >>> import torch >>> from transformers import AutoTokenizer, DebertaForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base") >>> model = DebertaForSequenceClassification.from_pretrained("microsoft/deberta-base") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_id = logits.argmax().item() >>> >>> num_labels = len(model.config.id2label) >>> model = DebertaForSequenceClassification.from_pretrained("microsoft/deberta-base", num_labels=num_labels) >>> labels = torch.tensor([1]) >>> loss = model(**inputs, labels=labels).loss Example of multi-label classification: >>> import torch >>> from transformers import AutoTokenizer, DebertaForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base") >>> model = DebertaForSequenceClassification.from_pretrained("microsoft/deberta-base", problem_type="multi_label_classification") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5] >>> >>> num_labels = len(model.config.id2label) >>> model = DebertaForSequenceClassification.from_pretrained( ... "microsoft/deberta-base", num_labels=num_labels, problem_type="multi_label_classification" ... ) >>> labels = torch.sum( ... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1 ... ).to(torch.float) >>> loss = model(**inputs, labels=labels).loss DebertaForTokenClassification class transformers.DebertaForTokenClassification < source > ( config ) Parameters config (DebertaConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DeBERTa Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It’s build on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and enhanced mask decoder. With those two improvements, it out perform BERT/RoBERTa on a majority of tasks with 80GB pretraining data. This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. 
See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1]. A transformers.modeling_outputs.TokenClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DebertaConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The DebertaForTokenClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: >>> from transformers import AutoTokenizer, DebertaForTokenClassification >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base") >>> model = DebertaForTokenClassification.from_pretrained("microsoft/deberta-base") >>> inputs = tokenizer( ... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt" ... ) >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_token_class_ids = logits.argmax(-1) >>> # Note that tokens are classified rather than input words, so there >>> # might be more predicted token classes than words, and multiple >>> # token classes might account for the same word >>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]] >>> labels = predicted_token_class_ids >>> loss = model(**inputs, labels=labels).loss DebertaForQuestionAnswering class transformers.DebertaForQuestionAnswering < source > ( config ) Parameters config (DebertaConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DeBERTa Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute span start logits and span end logits). The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It’s built on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and an enhanced mask decoder. With those two improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data. This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None start_positions: typing.Optional[torch.Tensor] = None end_positions: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? 
inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. start_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. end_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DebertaConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The DebertaForQuestionAnswering forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: >>> from transformers import AutoTokenizer, DebertaForQuestionAnswering >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("Palak/microsoft_deberta-large_squad") >>> model = DebertaForQuestionAnswering.from_pretrained("Palak/microsoft_deberta-large_squad") >>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" >>> inputs = tokenizer(question, text, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> answer_start_index = outputs.start_logits.argmax() >>> answer_end_index = outputs.end_logits.argmax() >>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] >>> tokenizer.decode(predict_answer_tokens, skip_special_tokens=True) ' a nice puppet' >>> # target is "nice puppet" >>> target_start_index = torch.tensor([12]) >>> target_end_index = torch.tensor([14]) >>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index) >>> loss = outputs.loss >>> round(loss.item(), 2) 0.14 TFDebertaModel class transformers.TFDebertaModel < source > ( *args **kwargs ) Parameters config (DebertaConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare DeBERTa Model transformer outputting raw hidden-states without any specific head on top. The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It’s built on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and an enhanced mask decoder. With those two improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB of pretraining data. This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! 
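To make the three input formats above concrete, here is a minimal illustrative sketch (not part of the original documentation) using the kamalkraj/deberta-base checkpoint that also appears in the example further below; each of the three calls is expected to return the same TFBaseModelOutput.
>>> from transformers import AutoTokenizer, TFDebertaModel
>>> tokenizer = AutoTokenizer.from_pretrained("kamalkraj/deberta-base")
>>> model = TFDebertaModel.from_pretrained("kamalkraj/deberta-base")
>>> encoded = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> # 1. keyword arguments, as with the PyTorch models
>>> outputs = model(input_ids=encoded["input_ids"], attention_mask=encoded["attention_mask"])
>>> # 2. a list of tensors in the first positional argument, in the order given in the docstring
>>> outputs = model([encoded["input_ids"], encoded["attention_mask"]])
>>> # 3. a dictionary mapping input names to tensors
>>> outputs = model({"input_ids": encoded["input_ids"], "attention_mask": encoded["attention_mask"]})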
call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor) Parameters input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray], and each example must have the shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_tf_outputs.TFBaseModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DebertaConfig) and inputs. last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(tf.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). 
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFDebertaModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFDebertaModel >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("kamalkraj/deberta-base") >>> model = TFDebertaModel.from_pretrained("kamalkraj/deberta-base") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> outputs = model(inputs) >>> last_hidden_states = outputs.last_hidden_state TFDebertaPreTrainedModel class transformers.TFDebertaPreTrainedModel < source > ( *args **kwargs ) An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models. call < source > ( inputs training = None mask = None ) Calls the model on new inputs and returns the outputs as tensors. In this case call() just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs). Note: This method should not be called directly. It is only meant to be overridden when subclassing tf.keras.Model. To call a model on an input, always use the __call__() method, i.e. model(inputs), which relies on the underlying call() method. TFDebertaForMaskedLM class transformers.TFDebertaForMaskedLM < source > ( *args **kwargs ) Parameters config (DebertaConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DeBERTa Model with a language modeling head on top. The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It’s build on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and enhanced mask decoder. With those two improvements, it out perform BERT/RoBERTa on a majority of tasks with 80GB pretraining data. This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! 
If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor) Parameters input_ids (np.ndarray, tf.Tensor, List[tf.Tensor] `Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a [`~utils.ModelOutput“] instead of a plain tuple. labels (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. 
Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size] A transformers.modeling_tf_outputs.TFMaskedLMOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DebertaConfig) and inputs. loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Masked language modeling (MLM) loss. logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFDebertaForMaskedLM forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFDebertaForMaskedLM >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("kamalkraj/deberta-base") >>> model = TFDebertaForMaskedLM.from_pretrained("kamalkraj/deberta-base") >>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="tf") >>> logits = model(**inputs).logits >>> >>> mask_token_index = tf.where((inputs.input_ids == tokenizer.mask_token_id)[0]) >>> selected_logits = tf.gather_nd(logits[0], indices=mask_token_index) >>> predicted_token_id = tf.math.argmax(selected_logits, axis=-1) >>> labels = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"] >>> >>> labels = tf.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100) >>> outputs = model(**inputs, labels=labels) TFDebertaForSequenceClassification class transformers.TFDebertaForSequenceClassification < source > ( *args **kwargs ) Parameters config (DebertaConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DeBERTa Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It’s built on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and enhanced mask decoder. 
With those two improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB pretraining data. This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor) Parameters input_ids (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? 
inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a [`~utils.ModelOutput“] instead of a plain tuple. labels (tf.Tensor or np.ndarray of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DebertaConfig) and inputs. loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFDebertaForSequenceClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: >>> from transformers import AutoTokenizer, TFDebertaForSequenceClassification >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("kamalkraj/deberta-base") >>> model = TFDebertaForSequenceClassification.from_pretrained("kamalkraj/deberta-base") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> logits = model(**inputs).logits >>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0]) >>> >>> num_labels = len(model.config.id2label) >>> model = TFDebertaForSequenceClassification.from_pretrained("kamalkraj/deberta-base", num_labels=num_labels) >>> labels = tf.constant(1) >>> loss = model(**inputs, labels=labels).loss TFDebertaForTokenClassification class transformers.TFDebertaForTokenClassification < source > ( *args **kwargs ) Parameters config (DebertaConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DeBERTa Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It’s built on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and enhanced mask decoder. With those two improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB pretraining data. This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! 
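To make the three conventions above concrete, here is a minimal sketch. The checkpoint and the tokenizer call simply mirror the examples elsewhere on this page; which of the three formats you pick makes no difference to the outputs.

from transformers import AutoTokenizer, TFDebertaForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("kamalkraj/deberta-base")
model = TFDebertaForTokenClassification.from_pretrained("kamalkraj/deberta-base")
encoding = tokenizer("Hello, my dog is cute", return_tensors="tf")

# 1) a single tensor holding input_ids only
outputs = model(encoding["input_ids"])
# 2) a list of tensors, in the order given in the docstring
outputs = model([encoding["input_ids"], encoding["attention_mask"]])
# 3) a dictionary keyed by the input names given in the docstring
outputs = model({"input_ids": encoding["input_ids"], "attention_mask": encoding["attention_mask"]})

All three calls are equivalent to passing the same tensors as keyword arguments, e.g. model(input_ids=..., attention_mask=...).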
call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor) Parameters input_ids (np.ndarray, tf.Tensor, List[tf.Tensor] `Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a [`~utils.ModelOutput“] instead of a plain tuple. labels (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1]. A transformers.modeling_tf_outputs.TFTokenClassifierOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DebertaConfig) and inputs. loss (tf.Tensor of shape (n,), optional, where n is the number of unmasked labels, returned when labels is provided) — Classification loss. logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). 
Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFDebertaForTokenClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFDebertaForTokenClassification >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("kamalkraj/deberta-base") >>> model = TFDebertaForTokenClassification.from_pretrained("kamalkraj/deberta-base") >>> inputs = tokenizer( ... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf" ... ) >>> logits = model(**inputs).logits >>> predicted_token_class_ids = tf.math.argmax(logits, axis=-1) >>> >>> >>> >>> predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()] >>> labels = predicted_token_class_ids >>> loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss) TFDebertaForQuestionAnswering class transformers.TFDebertaForQuestionAnswering < source > ( *args **kwargs ) Parameters config (DebertaConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DeBERTa Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute span start logits and span end logits). The DeBERTa model was proposed in DeBERTa: Decoding-enhanced BERT with Disentangled Attention by Pengcheng He, Xiaodong Liu, Jianfeng Gao, Weizhu Chen. It’s built on top of BERT/RoBERTa with two improvements, i.e. disentangled attention and enhanced mask decoder. With those two improvements, it outperforms BERT/RoBERTa on a majority of tasks with 80GB pretraining data. This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! 
If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None token_type_ids: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None start_positions: np.ndarray | tf.Tensor | None = None end_positions: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor) Parameters input_ids (np.ndarray, tf.Tensor, List[tf.Tensor] `Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? inputs_embeds (np.ndarray or tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a [`~utils.ModelOutput“] instead of a plain tuple. 
start_positions (tf.Tensor or np.ndarray of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss. end_positions (tf.Tensor or np.ndarray of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss. A transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DebertaConfig) and inputs. loss (tf.Tensor of shape (batch_size, ), optional, returned when start_positions and end_positions are provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. start_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFDebertaForQuestionAnswering forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFDebertaForQuestionAnswering >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("kamalkraj/deberta-base") >>> model = TFDebertaForQuestionAnswering.from_pretrained("kamalkraj/deberta-base") >>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" >>> inputs = tokenizer(question, text, return_tensors="tf") >>> outputs = model(**inputs) >>> answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0]) >>> answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0]) >>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] >>> >>> target_start_index = tf.constant([14]) >>> target_end_index = tf.constant([15]) >>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index) >>> loss = tf.math.reduce_mean(outputs.loss)
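As a small follow-up to the snippet above (not part of the original example), the predicted span can be mapped back to a string with the same tokenizer:

>>> tokenizer.decode(predict_answer_tokens)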
https://huggingface.co/docs/transformers/model_doc/deplot
DePlot Overview DePlot was proposed in the paper DePlot: One-shot visual language reasoning by plot-to-table translation from Fangyu Liu, Julian Martin Eisenschlos, Francesco Piccinno, Syrine Krichene, Chenxi Pang, Kenton Lee, Mandar Joshi, Wenhu Chen, Nigel Collier, Yasemin Altun. The abstract of the paper states the following: Visual language such as charts and plots is ubiquitous in the human world. Comprehending plots and charts requires strong reasoning skills. Prior state-of-the-art (SOTA) models require at least tens of thousands of training examples and their reasoning capabilities are still much limited, especially on complex human-written queries. This paper presents the first one-shot solution to visual language reasoning. We decompose the challenge of visual language reasoning into two steps: (1) plot-to-text translation, and (2) reasoning over the translated text. The key in this method is a modality conversion module, named as DePlot, which translates the image of a plot or chart to a linearized table. The output of DePlot can then be directly used to prompt a pretrained large language model (LLM), exploiting the few-shot reasoning capabilities of LLMs. To obtain DePlot, we standardize the plot-to-table task by establishing unified task formats and metrics, and train DePlot end-to-end on this task. DePlot can then be used off-the-shelf together with LLMs in a plug-and-play fashion. Compared with a SOTA model finetuned on more than >28k data points, DePlot+LLM with just one-shot prompting achieves a 24.0% improvement over finetuned SOTA on human-written queries from the task of chart QA. Model description DePlot is a model that is trained using Pix2Struct architecture. You can find more information about Pix2Struct in the Pix2Struct documentation. DePlot is a Visual Question Answering subset of Pix2Struct architecture. It renders the input question on the image and predicts the answer. Usage Currently one checkpoint is available for DePlot: google/deplot: DePlot fine-tuned on ChartQA dataset from transformers import AutoProcessor, Pix2StructForConditionalGeneration import requests from PIL import Image model = Pix2StructForConditionalGeneration.from_pretrained("google/deplot") processor = AutoProcessor.from_pretrained("google/deplot") url = "https://raw.githubusercontent.com/vis-nlp/ChartQA/main/ChartQA%20Dataset/val/png/5090.png" image = Image.open(requests.get(url, stream=True).raw) inputs = processor(images=image, text="Generate underlying data table of the figure below:", return_tensors="pt") predictions = model.generate(**inputs, max_new_tokens=512) print(processor.decode(predictions[0], skip_special_tokens=True)) Fine-tuning To fine-tune DePlot, refer to the pix2struct fine-tuning notebook. For Pix2Struct models, we have found out that fine-tuning the model with Adafactor and cosine learning rate scheduler leads to faster convergence: from transformers.optimization import Adafactor, get_cosine_schedule_with_warmup optimizer = Adafactor(self.parameters(), scale_parameter=False, relative_step=False, lr=0.01, weight_decay=1e-05) scheduler = get_cosine_schedule_with_warmup(optimizer, num_warmup_steps=1000, num_training_steps=40000)
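The overview above describes a two-step recipe: DePlot turns the chart into a linearized table, and a separate LLM then reasons over that table. A minimal sketch of the second step is shown below; the prompt wording and the LLM checkpoint are placeholders rather than part of the DePlot release, so substitute any instruction-following model you have access to:

from transformers import pipeline

# `predictions` and `processor` come from the usage snippet above.
table = processor.decode(predictions[0], skip_special_tokens=True)

prompt = (
    "Below is a table extracted from a chart. Answer the question using only the table.\n\n"
    f"{table}\n\n"
    "Question: Which category has the largest value?\nAnswer:"
)

# Placeholder checkpoint name; any text-generation model served by `pipeline` can be used here.
llm = pipeline("text-generation", model="<your-llm-checkpoint>")
print(llm(prompt, max_new_tokens=64)[0]["generated_text"])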
https://huggingface.co/docs/transformers/model_doc/deformable_detr
Deformable DETR Overview The Deformable DETR model was proposed in Deformable DETR: Deformable Transformers for End-to-End Object Detection by Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, Jifeng Dai. Deformable DETR mitigates the slow convergence issues and limited feature spatial resolution of the original DETR by leveraging a new deformable attention module which only attends to a small set of key sampling points around a reference. The abstract from the paper is the following: DETR has been recently proposed to eliminate the need for many hand-designed components in object detection while demonstrating good performance. However, it suffers from slow convergence and limited feature spatial resolution, due to the limitation of Transformer attention modules in processing image feature maps. To mitigate these issues, we proposed Deformable DETR, whose attention modules only attend to a small set of key sampling points around a reference. Deformable DETR can achieve better performance than DETR (especially on small objects) with 10 times less training epochs. Extensive experiments on the COCO benchmark demonstrate the effectiveness of our approach. Tips: One can use DeformableDetrImageProcessor to prepare images (and optional targets) for the model. Training Deformable DETR is equivalent to training the original DETR model. See the resources section below for demo notebooks. Deformable DETR architecture. Taken from the original paper. This model was contributed by nielsr. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Deformable DETR. Object Detection Demo notebooks regarding inference + fine-tuning on a custom dataset for DeformableDetrForObjectDetection can be found here. See also: Object detection task guide. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. DeformableDetrImageProcessor class transformers.DeformableDetrImageProcessor < source > ( format: typing.Union[str, transformers.models.deformable_detr.image_processing_deformable_detr.AnnotionFormat] = <AnnotionFormat.COCO_DETECTION: 'coco_detection'> do_resize: bool = True size: typing.Dict[str, int] = None resample: Resampling = <Resampling.BILINEAR: 2> do_rescale: bool = True rescale_factor: typing.Union[int, float] = 0.00392156862745098 do_normalize: bool = True image_mean: typing.Union[float, typing.List[float]] = None image_std: typing.Union[float, typing.List[float]] = None do_pad: bool = True **kwargs ) Parameters format (str, optional, defaults to "coco_detection") — Data format of the annotations. One of “coco_detection” or “coco_panoptic”. do_resize (bool, optional, defaults to True) — Controls whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by the do_resize parameter in the preprocess method. size (Dict[str, int] optional, defaults to {"shortest_edge" -- 800, "longest_edge": 1333}): Size of the image’s (height, width) dimensions after resizing. Can be overridden by the size parameter in the preprocess method. resample (PILImageResampling, optional, defaults to PILImageResampling.BILINEAR) — Resampling filter to use if resizing the image. do_rescale (bool, optional, defaults to True) — Controls whether to rescale the image by the specified scale rescale_factor. 
Can be overridden by the do_rescale parameter in the preprocess method. rescale_factor (int or float, optional, defaults to 1/255) — Scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter in the preprocess method. do_normalize — Controls whether to normalize the image. Can be overridden by the do_normalize parameter in the preprocess method. image_mean (float or List[float], optional, defaults to IMAGENET_DEFAULT_MEAN) — Mean values to use when normalizing the image. Can be a single value or a list of values, one for each channel. Can be overridden by the image_mean parameter in the preprocess method. image_std (float or List[float], optional, defaults to IMAGENET_DEFAULT_STD) — Standard deviation values to use when normalizing the image. Can be a single value or a list of values, one for each channel. Can be overridden by the image_std parameter in the preprocess method. do_pad (bool, optional, defaults to True) — Controls whether to pad the image to the largest image in a batch and create a pixel mask. Can be overridden by the do_pad parameter in the preprocess method. Constructs a Deformable DETR image processor. preprocess < source > ( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]] annotations: typing.Union[typing.Dict[str, typing.Union[int, str, typing.List[typing.Dict]]], typing.List[typing.Dict[str, typing.Union[int, str, typing.List[typing.Dict]]]], NoneType] = None return_segmentation_masks: bool = None masks_path: typing.Union[str, pathlib.Path, NoneType] = None do_resize: typing.Optional[bool] = None size: typing.Union[typing.Dict[str, int], NoneType] = None resample = None do_rescale: typing.Optional[bool] = None rescale_factor: typing.Union[int, float, NoneType] = None do_normalize: typing.Optional[bool] = None image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None do_pad: typing.Optional[bool] = None format: typing.Union[str, transformers.models.deformable_detr.image_processing_deformable_detr.AnnotionFormat, NoneType] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None data_format: typing.Union[str, transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'> input_data_format: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None **kwargs ) Parameters images (ImageInput) — Image or batch of images to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set do_rescale=False. annotations (AnnotationType or List[AnnotationType], optional) — List of annotations associated with the image or batch of images. If annotation is for object detection, the annotations should be a dictionary with the following keys: “image_id” (int): The image id. “annotations” (List[Dict]): List of annotations for an image. Each annotation should be a dictionary. An image can have no annotations, in which case the list should be empty. If annotation is for segmentation, the annotations should be a dictionary with the following keys: “image_id” (int): The image id. “segments_info” (List[Dict]): List of segments for an image. Each segment should be a dictionary. An image can have no segments, in which case the list should be empty. 
“file_name” (str): The file name of the image. return_segmentation_masks (bool, optional, defaults to self.return_segmentation_masks) — Whether to return segmentation masks. masks_path (str or pathlib.Path, optional) — Path to the directory containing the segmentation masks. do_resize (bool, optional, defaults to self.do_resize) — Whether to resize the image. size (Dict[str, int], optional, defaults to self.size) — Size of the image after resizing. resample (PILImageResampling, optional, defaults to self.resample) — Resampling filter to use when resizing the image. do_rescale (bool, optional, defaults to self.do_rescale) — Whether to rescale the image. rescale_factor (float, optional, defaults to self.rescale_factor) — Rescale factor to use when rescaling the image. do_normalize (bool, optional, defaults to self.do_normalize) — Whether to normalize the image. image_mean (float or List[float], optional, defaults to self.image_mean) — Mean to use when normalizing the image. image_std (float or List[float], optional, defaults to self.image_std) — Standard deviation to use when normalizing the image. do_pad (bool, optional, defaults to self.do_pad) — Whether to pad the image. format (str or AnnotionFormat, optional, defaults to self.format) — Format of the annotations. return_tensors (str or TensorType, optional, defaults to self.return_tensors) — Type of tensors to return. If None, will return the list of images. data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) — The channel dimension format for the output image. Can be one of: "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format. "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format. Unset: Use the channel dimension format of the input image. input_data_format (ChannelDimension or str, optional) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of: "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format. "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format. "none" or ChannelDimension.NONE: image in (height, width) format. Preprocess an image or a batch of images so that it can be used by the model. post_process_object_detection < source > ( outputs threshold: float = 0.5 target_sizes: typing.Union[transformers.utils.generic.TensorType, typing.List[typing.Tuple]] = None top_k: int = 100 ) → List[Dict] Parameters outputs (DetrObjectDetectionOutput) — Raw outputs of the model. threshold (float, optional) — Score threshold to keep object detection predictions. target_sizes (torch.Tensor or List[Tuple[int, int]], optional) — Tensor of shape (batch_size, 2) or list of tuples (Tuple[int, int]) containing the target size (height, width) of each image in the batch. If left to None, predictions will not be resized. top_k (int, optional, defaults to 100) — Keep only top k bounding boxes before filtering by thresholding. A list of dictionaries, each dictionary containing the scores, labels and boxes for an image in the batch as predicted by the model. Converts the raw output of DeformableDetrForObjectDetection into final bounding boxes in (top_left_x, top_left_y, bottom_right_x, bottom_right_y) format. Only supports PyTorch. DeformableDetrFeatureExtractor Preprocess an image or a batch of images. 
( outputs threshold: float = 0.5 target_sizes: typing.Union[transformers.utils.generic.TensorType, typing.List[typing.Tuple]] = None top_k: int = 100 ) → List[Dict] Parameters outputs (DetrObjectDetectionOutput) — Raw outputs of the model. threshold (float, optional) — Score threshold to keep object detection predictions. target_sizes (torch.Tensor or List[Tuple[int, int]], optional) — Tensor of shape (batch_size, 2) or list of tuples (Tuple[int, int]) containing the target size (height, width) of each image in the batch. If left to None, predictions will not be resized. top_k (int, optional, defaults to 100) — Keep only top k bounding boxes before filtering by thresholding. A list of dictionaries, each dictionary containing the scores, labels and boxes for an image in the batch as predicted by the model. Converts the raw output of DeformableDetrForObjectDetection into final bounding boxes in (top_left_x, top_left_y, bottom_right_x, bottom_right_y) format. Only supports PyTorch. DeformableDetrConfig class transformers.DeformableDetrConfig < source > ( use_timm_backbone = True backbone_config = None num_channels = 3 num_queries = 300 max_position_embeddings = 1024 encoder_layers = 6 encoder_ffn_dim = 1024 encoder_attention_heads = 8 decoder_layers = 6 decoder_ffn_dim = 1024 decoder_attention_heads = 8 encoder_layerdrop = 0.0 is_encoder_decoder = True activation_function = 'relu' d_model = 256 dropout = 0.1 attention_dropout = 0.0 activation_dropout = 0.0 init_std = 0.02 init_xavier_std = 1.0 return_intermediate = True auxiliary_loss = False position_embedding_type = 'sine' backbone = 'resnet50' use_pretrained_backbone = True dilation = False num_feature_levels = 4 encoder_n_points = 4 decoder_n_points = 4 two_stage = False two_stage_num_proposals = 300 with_box_refine = False class_cost = 1 bbox_cost = 5 giou_cost = 2 mask_loss_coefficient = 1 dice_loss_coefficient = 1 bbox_loss_coefficient = 5 giou_loss_coefficient = 2 eos_coefficient = 0.1 focal_alpha = 0.25 disable_custom_kernels = False **kwargs ) Parameters use_timm_backbone (bool, optional, defaults to True) — Whether or not to use the timm library for the backbone. If set to False, will use the AutoBackbone API. backbone_config (PretrainedConfig or dict, optional) — The configuration of the backbone model. Only used in case use_timm_backbone is set to False in which case it will default to ResNetConfig(). num_channels (int, optional, defaults to 3) — The number of input channels. num_queries (int, optional, defaults to 300) — Number of object queries, i.e. detection slots. This is the maximal number of objects DeformableDetrModel can detect in a single image. In case two_stage is set to True, we use two_stage_num_proposals instead. d_model (int, optional, defaults to 256) — Dimension of the layers. encoder_layers (int, optional, defaults to 6) — Number of encoder layers. decoder_layers (int, optional, defaults to 6) — Number of decoder layers. encoder_attention_heads (int, optional, defaults to 8) — Number of attention heads for each attention layer in the Transformer encoder. decoder_attention_heads (int, optional, defaults to 8) — Number of attention heads for each attention layer in the Transformer decoder. decoder_ffn_dim (int, optional, defaults to 1024) — Dimension of the “intermediate” (often named feed-forward) layer in decoder. encoder_ffn_dim (int, optional, defaults to 1024) — Dimension of the “intermediate” (often named feed-forward) layer in decoder. 
activation_function (str or function, optional, defaults to "relu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported. dropout (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities. activation_dropout (float, optional, defaults to 0.0) — The dropout ratio for activations inside the fully connected layer. init_std (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. init_xavier_std (float, optional, defaults to 1) — The scaling factor used for the Xavier initialization gain in the HM Attention map module. encoder_layerdrop (float, optional, defaults to 0.0) — The LayerDrop probability for the encoder. See the [LayerDrop paper](see https://arxiv.org/abs/1909.11556) for more details. auxiliary_loss (bool, optional, defaults to False) — Whether auxiliary decoding losses (loss at each decoder layer) are to be used. position_embedding_type (str, optional, defaults to "sine") — Type of position embeddings to be used on top of the image features. One of "sine" or "learned". backbone (str, optional, defaults to "resnet50") — Name of convolutional backbone to use in case use_timm_backbone = True. Supports any convolutional backbone from the timm package. For a list of all available models, see this page. use_pretrained_backbone (bool, optional, defaults to True) — Whether to use pretrained weights for the backbone. Only supported when use_timm_backbone = True. dilation (bool, optional, defaults to False) — Whether to replace stride with dilation in the last convolutional block (DC5). Only supported when use_timm_backbone = True. class_cost (float, optional, defaults to 1) — Relative weight of the classification error in the Hungarian matching cost. bbox_cost (float, optional, defaults to 5) — Relative weight of the L1 error of the bounding box coordinates in the Hungarian matching cost. giou_cost (float, optional, defaults to 2) — Relative weight of the generalized IoU loss of the bounding box in the Hungarian matching cost. mask_loss_coefficient (float, optional, defaults to 1) — Relative weight of the Focal loss in the panoptic segmentation loss. dice_loss_coefficient (float, optional, defaults to 1) — Relative weight of the DICE/F-1 loss in the panoptic segmentation loss. bbox_loss_coefficient (float, optional, defaults to 5) — Relative weight of the L1 bounding box loss in the object detection loss. giou_loss_coefficient (float, optional, defaults to 2) — Relative weight of the generalized IoU loss in the object detection loss. eos_coefficient (float, optional, defaults to 0.1) — Relative classification weight of the ‘no-object’ class in the object detection loss. num_feature_levels (int, optional, defaults to 4) — The number of input feature levels. encoder_n_points (int, optional, defaults to 4) — The number of sampled keys in each feature level for each attention head in the encoder. decoder_n_points (int, optional, defaults to 4) — The number of sampled keys in each feature level for each attention head in the decoder. 
two_stage (bool, optional, defaults to False) — Whether to apply a two-stage deformable DETR, where the region proposals are also generated by a variant of Deformable DETR, which are further fed into the decoder for iterative bounding box refinement. two_stage_num_proposals (int, optional, defaults to 300) — The number of region proposals to be generated, in case two_stage is set to True. with_box_refine (bool, optional, defaults to False) — Whether to apply iterative bounding box refinement, where each decoder layer refines the bounding boxes based on the predictions from the previous layer. focal_alpha (float, optional, defaults to 0.25) — Alpha parameter in the focal loss. disable_custom_kernels (bool, optional, defaults to False) — Disable the use of custom CUDA and CPU kernels. This option is necessary for the ONNX export, as custom kernels are not supported by PyTorch ONNX export. This is the configuration class to store the configuration of a DeformableDetrModel. It is used to instantiate a Deformable DETR model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Deformable DETR SenseTime/deformable-detr architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Examples: >>> from transformers import DeformableDetrConfig, DeformableDetrModel >>> >>> configuration = DeformableDetrConfig() >>> >>> model = DeformableDetrModel(configuration) >>> >>> configuration = model.config DeformableDetrModel class transformers.DeformableDetrModel < source > ( config: DeformableDetrConfig ) Parameters config (DeformableDetrConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare Deformable DETR Model (consisting of a backbone and encoder-decoder Transformer) outputting raw hidden-states without any specific head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: FloatTensor pixel_mask: typing.Optional[torch.LongTensor] = None decoder_attention_mask: typing.Optional[torch.FloatTensor] = None encoder_outputs: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.deformable_detr.modeling_deformable_detr.DeformableDetrModelOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See DeformableDetrImageProcessor.call() for details. 
pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) — Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]: 1 for pixels that are real (i.e. not masked), 0 for pixels that are padding (i.e. masked). What are attention masks? decoder_attention_mask (torch.FloatTensor of shape (batch_size, num_queries), optional) — Not used by default. Can be used to mask object queries. encoder_outputs (tuple(tuple(torch.FloatTensor), optional) — Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions) last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing the flattened feature map (output of the backbone + projection layer), you can choose to directly pass a flattened representation of an image. decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, num_queries, hidden_size), optional) — Optionally, instead of initializing the queries with a tensor of zeros, you can choose to directly pass an embedded representation. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.models.deformable_detr.modeling_deformable_detr.DeformableDetrModelOutput or tuple(torch.FloatTensor) A transformers.models.deformable_detr.modeling_deformable_detr.DeformableDetrModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DeformableDetrConfig) and inputs. init_reference_points (torch.FloatTensor of shape (batch_size, num_queries, 4)) — Initial reference points sent through the Transformer decoder. last_hidden_state (torch.FloatTensor of shape (batch_size, num_queries, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model. intermediate_hidden_states (torch.FloatTensor of shape (batch_size, config.decoder_layers, num_queries, hidden_size)) — Stacked intermediate hidden states (output of each layer of the decoder). intermediate_reference_points (torch.FloatTensor of shape (batch_size, config.decoder_layers, num_queries, 4)) — Stacked intermediate reference points (reference points of each layer of the decoder). decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, num_queries, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, num_queries, num_queries). 
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_queries, num_heads, 4, 4). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_queries, num_heads, 4, 4). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. enc_outputs_class (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels), optional, returned when config.with_box_refine=True and config.two_stage=True) — Predicted bounding boxes scores where the top config.two_stage_num_proposals scoring bounding boxes are picked as region proposals in the first stage. Output of bounding box binary classification (i.e. foreground and background). enc_outputs_coord_logits (torch.FloatTensor of shape (batch_size, sequence_length, 4), optional, returned when config.with_box_refine=True and config.two_stage=True) — Logits of predicted bounding boxes coordinates in the first stage. The DeformableDetrModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import AutoImageProcessor, DeformableDetrModel >>> from PIL import Image >>> import requests >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image_processor = AutoImageProcessor.from_pretrained("SenseTime/deformable-detr") >>> model = DeformableDetrModel.from_pretrained("SenseTime/deformable-detr") >>> inputs = image_processor(images=image, return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state >>> list(last_hidden_states.shape) [1, 300, 256] DeformableDetrForObjectDetection class transformers.DeformableDetrForObjectDetection < source > ( config: DeformableDetrConfig ) Parameters config (DeformableDetrConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. 
Deformable DETR Model (consisting of a backbone and encoder-decoder Transformer) with object detection heads on top, for tasks such as COCO detection. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: FloatTensor pixel_mask: typing.Optional[torch.LongTensor] = None decoder_attention_mask: typing.Optional[torch.FloatTensor] = None encoder_outputs: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[typing.List[dict]] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.deformable_detr.modeling_deformable_detr.DeformableDetrObjectDetectionOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See DeformableDetrImageProcessor.call() for details. pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) — Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]: 1 for pixels that are real (i.e. not masked), 0 for pixels that are padding (i.e. masked). What are attention masks? decoder_attention_mask (torch.FloatTensor of shape (batch_size, num_queries), optional) — Not used by default. Can be used to mask object queries. encoder_outputs (tuple(tuple(torch.FloatTensor), optional) — Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions) last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing the flattened feature map (output of the backbone + projection layer), you can choose to directly pass a flattened representation of an image. decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, num_queries, hidden_size), optional) — Optionally, instead of initializing the queries with a tensor of zeros, you can choose to directly pass an embedded representation. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (List[Dict] of len (batch_size,), optional) — Labels for computing the bipartite matching loss. List of dicts, each dictionary containing at least the following 2 keys: ‘class_labels’ and ‘boxes’ (the class labels and bounding boxes of an image in the batch respectively). 
The class labels themselves should be a torch.LongTensor of len (number of bounding boxes in the image,) and the boxes a torch.FloatTensor of shape (number of bounding boxes in the image, 4). Returns transformers.models.deformable_detr.modeling_deformable_detr.DeformableDetrObjectDetectionOutput or tuple(torch.FloatTensor) A transformers.models.deformable_detr.modeling_deformable_detr.DeformableDetrObjectDetectionOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DeformableDetrConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels are provided) — Total loss as a linear combination of a negative log-likelihood (cross-entropy) for class prediction and a bounding box loss. The latter is defined as a linear combination of the L1 loss and the generalized scale-invariant IoU loss. loss_dict (Dict, optional) — A dictionary containing the individual losses. Useful for logging. logits (torch.FloatTensor of shape (batch_size, num_queries, num_classes + 1)) — Classification logits (including no-object) for all queries. pred_boxes (torch.FloatTensor of shape (batch_size, num_queries, 4)) — Normalized box coordinates for all queries, represented as (center_x, center_y, width, height). These values are normalized in [0, 1], relative to the size of each individual image in the batch (disregarding possible padding). You can use ~DeformableDetrImageProcessor.post_process_object_detection to retrieve the unnormalized bounding boxes. auxiliary_outputs (list[Dict], optional) — Optional, only returned when auxiliary losses are activated (i.e. config.auxiliary_loss is set to True) and labels are provided. It is a list of dictionaries containing the two above keys (logits and pred_boxes) for each decoder layer. last_hidden_state (torch.FloatTensor of shape (batch_size, num_queries, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the decoder of the model. decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, num_queries, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, num_queries, num_queries). 
encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, sequence_length, num_heads, 4, 4). Attention weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. intermediate_hidden_states (torch.FloatTensor of shape (batch_size, config.decoder_layers, num_queries, hidden_size)) — Stacked intermediate hidden states (output of each layer of the decoder). intermediate_reference_points (torch.FloatTensor of shape (batch_size, config.decoder_layers, num_queries, 4)) — Stacked intermediate reference points (reference points of each layer of the decoder). init_reference_points (torch.FloatTensor of shape (batch_size, num_queries, 4)) — Initial reference points sent through the Transformer decoder. enc_outputs_class (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels), optional, returned when config.with_box_refine=True and config.two_stage=True) — Predicted bounding boxes scores where the top config.two_stage_num_proposals scoring bounding boxes are picked as region proposals in the first stage. Output of bounding box binary classification (i.e. foreground and background). enc_outputs_coord_logits (torch.FloatTensor of shape (batch_size, sequence_length, 4), optional, returned when config.with_box_refine=True and config.two_stage=True) — Logits of predicted bounding boxes coordinates in the first stage. The DeformableDetrForObjectDetection forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import AutoImageProcessor, DeformableDetrForObjectDetection >>> from PIL import Image >>> import requests >>> import torch >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image_processor = AutoImageProcessor.from_pretrained("SenseTime/deformable-detr") >>> model = DeformableDetrForObjectDetection.from_pretrained("SenseTime/deformable-detr") >>> inputs = image_processor(images=image, return_tensors="pt") >>> outputs = model(**inputs) >>> >>> target_sizes = torch.tensor([image.size[::-1]]) >>> results = image_processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[ ... 0 ... ] >>> for score, label, box in zip(results["scores"], results["labels"], results["boxes"]): ... box = [round(i, 2) for i in box.tolist()] ... print( ... f"Detected {model.config.id2label[label.item()]} with confidence " ... f"{round(score.item(), 3)} at location {box}" ... ) Detected cat with confidence 0.8 at location [16.5, 52.84, 318.25, 470.78] Detected cat with confidence 0.789 at location [342.19, 24.3, 640.02, 372.25] Detected remote with confidence 0.633 at location [40.79, 72.78, 176.76, 117.25]
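Beyond inference, the labels argument documented above can be used to compute the bipartite matching loss for fine-tuning. The following is a minimal, hedged sketch rather than an official recipe: the class ids and box values are made up for illustration, and boxes use the normalized (center_x, center_y, width, height) format described above.

>>> import torch
>>> from transformers import AutoImageProcessor, DeformableDetrForObjectDetection
>>> from PIL import Image
>>> import requests

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = AutoImageProcessor.from_pretrained("SenseTime/deformable-detr")
>>> model = DeformableDetrForObjectDetection.from_pretrained("SenseTime/deformable-detr")

>>> inputs = image_processor(images=image, return_tensors="pt")
>>> # one dict per image in the batch: integer class labels plus normalized boxes
>>> labels = [
...     {
...         "class_labels": torch.tensor([17, 17], dtype=torch.long),  # illustrative class ids
...         "boxes": torch.tensor([[0.25, 0.55, 0.45, 0.85], [0.75, 0.30, 0.45, 0.55]]),  # illustrative boxes
...     }
... ]
>>> outputs = model(**inputs, labels=labels)
>>> outputs.loss.backward()  # loss combines the classification, L1 and generalized IoU terms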
https://huggingface.co/docs/transformers/model_doc/dialogpt
DialoGPT Overview DialoGPT was proposed in DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation by Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, Bill Dolan. It’s a GPT2 Model trained on 147M conversation-like exchanges extracted from Reddit. The abstract from the paper is the following: We present a large, tunable neural conversational response generation model, DialoGPT (dialogue generative pre-trained transformer). Trained on 147M conversation-like exchanges extracted from Reddit comment chains over a period spanning from 2005 through 2017, DialoGPT extends the Hugging Face PyTorch transformer to attain a performance close to human both in terms of automatic and human evaluation in single-turn dialogue settings. We show that conversational systems that leverage DialoGPT generate more relevant, contentful and context-consistent responses than strong baseline systems. The pre-trained model and training pipeline are publicly released to facilitate research into neural response generation and the development of more intelligent open-domain dialogue systems. Tips: DialoGPT is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than the left. DialoGPT was trained with a causal language modeling (CLM) objective on conversational data and is therefore powerful at response generation in open-domain dialogue systems. DialoGPT enables the user to create a chatbot in just 10 lines of code as shown on DialoGPT’s model card (see the sketch below). Training: In order to train or fine-tune DialoGPT, one can use causal language modeling training. To cite the official paper: We follow the OpenAI GPT-2 to model a multiturn dialogue session as a long text and frame the generation task as language modeling. We first concatenate all dialog turns within a dialogue session into a long text x_1,…, x_N (N is the sequence length), ended by the end-of-text token. For more information, please refer to the original paper. DialoGPT’s architecture is based on the GPT2 model, so one can refer to GPT2’s documentation page. The original code can be found here.
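A minimal chat loop in the spirit of the model card follows. This is a hedged sketch rather than the official snippet: it assumes the microsoft/DialoGPT-medium checkpoint and simply concatenates each turn, terminated by the end-of-text token as described above, before calling generate.

>>> import torch
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
>>> model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

>>> chat_history_ids = None
>>> for step in range(3):
...     # encode the new user turn, ended by the end-of-text token
...     new_input_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token, return_tensors="pt")
...     # append the new turn to the running dialogue history
...     bot_input_ids = new_input_ids if chat_history_ids is None else torch.cat([chat_history_ids, new_input_ids], dim=-1)
...     # generate a response while keeping the full history as context
...     chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
...     print("DialoGPT:", tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))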
https://huggingface.co/docs/transformers/model_doc/deta
DETA Overview The DETA model was proposed in NMS Strikes Back by Jeffrey Ouyang-Zhang, Jang Hyun Cho, Xingyi Zhou, Philipp Krähenbühl. DETA (short for Detection Transformers with Assignment) improves Deformable DETR by replacing the one-to-one bipartite Hungarian matching loss with one-to-many label assignments used in traditional detectors with non-maximum suppression (NMS). This leads to significant gains of up to 2.5 mAP. The abstract from the paper is the following: Detection Transformer (DETR) directly transforms queries to unique objects by using one-to-one bipartite matching during training and enables end-to-end object detection. Recently, these models have surpassed traditional detectors on COCO with undeniable elegance. However, they differ from traditional detectors in multiple designs, including model architecture and training schedules, and thus the effectiveness of one-to-one matching is not fully understood. In this work, we conduct a strict comparison between the one-to-one Hungarian matching in DETRs and the one-to-many label assignments in traditional detectors with non-maximum supervision (NMS). Surprisingly, we observe one-to-many assignments with NMS consistently outperform standard one-to-one matching under the same setting, with a significant gain of up to 2.5 mAP. Our detector that trains Deformable-DETR with traditional IoU-based label assignment achieved 50.2 COCO mAP within 12 epochs (1x schedule) with ResNet50 backbone, outperforming all existing traditional or transformer-based detectors in this setting. On multiple datasets, schedules, and architectures, we consistently show bipartite matching is unnecessary for performant detection transformers. Furthermore, we attribute the success of detection transformers to their expressive transformer architecture. Tips: One can use DetaImageProcessor to prepare images and optional targets for the model. DETA overview. Taken from the original paper. This model was contributed by nielsr. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DETA. Demo notebooks for DETA can be found here. See also: Object detection task guide If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. DetaConfig class transformers.DetaConfig < source > ( backbone_config = None num_queries = 900 max_position_embeddings = 2048 encoder_layers = 6 encoder_ffn_dim = 2048 encoder_attention_heads = 8 decoder_layers = 6 decoder_ffn_dim = 1024 decoder_attention_heads = 8 encoder_layerdrop = 0.0 is_encoder_decoder = True activation_function = 'relu' d_model = 256 dropout = 0.1 attention_dropout = 0.0 activation_dropout = 0.0 init_std = 0.02 init_xavier_std = 1.0 return_intermediate = True auxiliary_loss = False position_embedding_type = 'sine' num_feature_levels = 5 encoder_n_points = 4 decoder_n_points = 4 two_stage = True two_stage_num_proposals = 300 with_box_refine = True assign_first_stage = True class_cost = 1 bbox_cost = 5 giou_cost = 2 mask_loss_coefficient = 1 dice_loss_coefficient = 1 bbox_loss_coefficient = 5 giou_loss_coefficient = 2 eos_coefficient = 0.1 focal_alpha = 0.25 **kwargs ) Parameters backbone_config (PretrainedConfig or dict, optional, defaults to ResNetConfig()) — The configuration of the backbone model. 
num_queries (int, optional, defaults to 900) — Number of object queries, i.e. detection slots. This is the maximal number of objects DetaModel can detect in a single image. In case two_stage is set to True, we use two_stage_num_proposals instead. d_model (int, optional, defaults to 256) — Dimension of the layers. encoder_layers (int, optional, defaults to 6) — Number of encoder layers. decoder_layers (int, optional, defaults to 6) — Number of decoder layers. encoder_attention_heads (int, optional, defaults to 8) — Number of attention heads for each attention layer in the Transformer encoder. decoder_attention_heads (int, optional, defaults to 8) — Number of attention heads for each attention layer in the Transformer decoder. decoder_ffn_dim (int, optional, defaults to 1024) — Dimension of the “intermediate” (often named feed-forward) layer in the decoder. encoder_ffn_dim (int, optional, defaults to 2048) — Dimension of the “intermediate” (often named feed-forward) layer in the encoder. activation_function (str or function, optional, defaults to "relu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported. dropout (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities. activation_dropout (float, optional, defaults to 0.0) — The dropout ratio for activations inside the fully connected layer. init_std (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. init_xavier_std (float, optional, defaults to 1) — The scaling factor used for the Xavier initialization gain in the HM Attention map module. encoder_layerdrop (float, optional, defaults to 0.0) — The LayerDrop probability for the encoder. See the LayerDrop paper (https://arxiv.org/abs/1909.11556) for more details. auxiliary_loss (bool, optional, defaults to False) — Whether auxiliary decoding losses (loss at each decoder layer) are to be used. position_embedding_type (str, optional, defaults to "sine") — Type of position embeddings to be used on top of the image features. One of "sine" or "learned". class_cost (float, optional, defaults to 1) — Relative weight of the classification error in the Hungarian matching cost. bbox_cost (float, optional, defaults to 5) — Relative weight of the L1 error of the bounding box coordinates in the Hungarian matching cost. giou_cost (float, optional, defaults to 2) — Relative weight of the generalized IoU loss of the bounding box in the Hungarian matching cost. mask_loss_coefficient (float, optional, defaults to 1) — Relative weight of the Focal loss in the panoptic segmentation loss. dice_loss_coefficient (float, optional, defaults to 1) — Relative weight of the DICE/F-1 loss in the panoptic segmentation loss. bbox_loss_coefficient (float, optional, defaults to 5) — Relative weight of the L1 bounding box loss in the object detection loss. giou_loss_coefficient (float, optional, defaults to 2) — Relative weight of the generalized IoU loss in the object detection loss. eos_coefficient (float, optional, defaults to 0.1) — Relative classification weight of the ‘no-object’ class in the object detection loss. num_feature_levels (int, optional, defaults to 5) — The number of input feature levels. 
encoder_n_points (int, optional, defaults to 4) — The number of sampled keys in each feature level for each attention head in the encoder. decoder_n_points (int, optional, defaults to 4) — The number of sampled keys in each feature level for each attention head in the decoder. two_stage (bool, optional, defaults to True) — Whether to apply a two-stage deformable DETR, where the region proposals are also generated by a variant of DETA, which are further fed into the decoder for iterative bounding box refinement. two_stage_num_proposals (int, optional, defaults to 300) — The number of region proposals to be generated, in case two_stage is set to True. with_box_refine (bool, optional, defaults to True) — Whether to apply iterative bounding box refinement, where each decoder layer refines the bounding boxes based on the predictions from the previous layer. focal_alpha (float, optional, defaults to 0.25) — Alpha parameter in the focal loss. This is the configuration class to store the configuration of a DetaModel. It is used to instantiate a DETA model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the DETA SenseTime/deformable-detr architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Examples: >>> from transformers import DetaConfig, DetaModel >>> >>> configuration = DetaConfig() >>> >>> model = DetaModel(configuration) >>> >>> configuration = model.config DetaImageProcessor class transformers.DetaImageProcessor < source > ( format: typing.Union[str, transformers.models.deta.image_processing_deta.AnnotionFormat] = <AnnotionFormat.COCO_DETECTION: 'coco_detection'> do_resize: bool = True size: typing.Dict[str, int] = None resample: Resampling = <Resampling.BILINEAR: 2> do_rescale: bool = True rescale_factor: typing.Union[int, float] = 0.00392156862745098 do_normalize: bool = True image_mean: typing.Union[float, typing.List[float]] = None image_std: typing.Union[float, typing.List[float]] = None do_pad: bool = True **kwargs ) Parameters format (str, optional, defaults to "coco_detection") — Data format of the annotations. One of “coco_detection” or “coco_panoptic”. do_resize (bool, optional, defaults to True) — Controls whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by the do_resize parameter in the preprocess method. size (Dict[str, int] optional, defaults to {"shortest_edge" -- 800, "longest_edge": 1333}): Size of the image’s (height, width) dimensions after resizing. Can be overridden by the size parameter in the preprocess method. resample (PILImageResampling, optional, defaults to PILImageResampling.BILINEAR) — Resampling filter to use if resizing the image. do_rescale (bool, optional, defaults to True) — Controls whether to rescale the image by the specified scale rescale_factor. Can be overridden by the do_rescale parameter in the preprocess method. rescale_factor (int or float, optional, defaults to 1/255) — Scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter in the preprocess method. do_normalize — Controls whether to normalize the image. Can be overridden by the do_normalize parameter in the preprocess method. image_mean (float or List[float], optional, defaults to IMAGENET_DEFAULT_MEAN) — Mean values to use when normalizing the image. 
Can be a single value or a list of values, one for each channel. Can be overridden by the image_mean parameter in the preprocess method. image_std (float or List[float], optional, defaults to IMAGENET_DEFAULT_STD) — Standard deviation values to use when normalizing the image. Can be a single value or a list of values, one for each channel. Can be overridden by the image_std parameter in the preprocess method. do_pad (bool, optional, defaults to True) — Controls whether to pad the image to the largest image in a batch and create a pixel mask. Can be overridden by the do_pad parameter in the preprocess method. Constructs a DETA image processor. preprocess < source > ( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]] annotations: typing.Union[typing.List[typing.Dict], typing.List[typing.List[typing.Dict]], NoneType] = None return_segmentation_masks: bool = None masks_path: typing.Union[str, pathlib.Path, NoneType] = None do_resize: typing.Optional[bool] = None size: typing.Union[typing.Dict[str, int], NoneType] = None resample = None do_rescale: typing.Optional[bool] = None rescale_factor: typing.Union[int, float, NoneType] = None do_normalize: typing.Optional[bool] = None image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None do_pad: typing.Optional[bool] = None format: typing.Union[str, transformers.models.deta.image_processing_deta.AnnotionFormat, NoneType] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None data_format: typing.Union[str, transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'> input_data_format: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None **kwargs ) Parameters images (ImageInput) — Image or batch of images to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set do_rescale=False. annotations (List[Dict] or List[List[Dict]], optional) — List of annotations associated with the image or batch of images. If the annotation is for object detection, the annotations should be a dictionary with the following keys: “image_id” (int): The image id. “annotations” (List[Dict]): List of annotations for an image. Each annotation should be a dictionary. An image can have no annotations, in which case the list should be empty. If the annotation is for segmentation, the annotations should be a dictionary with the following keys: “image_id” (int): The image id. “segments_info” (List[Dict]): List of segments for an image. Each segment should be a dictionary. An image can have no segments, in which case the list should be empty. “file_name” (str): The file name of the image. return_segmentation_masks (bool, optional, defaults to self.return_segmentation_masks) — Whether to return segmentation masks. masks_path (str or pathlib.Path, optional) — Path to the directory containing the segmentation masks. do_resize (bool, optional, defaults to self.do_resize) — Whether to resize the image. size (Dict[str, int], optional, defaults to self.size) — Size of the image after resizing. resample (PILImageResampling, optional, defaults to self.resample) — Resampling filter to use when resizing the image. 
do_rescale (bool, optional, defaults to self.do_rescale) — Whether to rescale the image. rescale_factor (float, optional, defaults to self.rescale_factor) — Rescale factor to use when rescaling the image. do_normalize (bool, optional, defaults to self.do_normalize) — Whether to normalize the image. image_mean (float or List[float], optional, defaults to self.image_mean) — Mean to use when normalizing the image. image_std (float or List[float], optional, defaults to self.image_std) — Standard deviation to use when normalizing the image. do_pad (bool, optional, defaults to self.do_pad) — Whether to pad the image. format (str or AnnotionFormat, optional, defaults to self.format) — Format of the annotations. return_tensors (str or TensorType, optional, defaults to self.return_tensors) — Type of tensors to return. If None, will return the list of images. data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) — The channel dimension format for the output image. Can be one of: "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format. "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format. Unset: Use the channel dimension format of the input image. input_data_format (ChannelDimension or str, optional) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of: "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format. "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format. "none" or ChannelDimension.NONE: image in (height, width) format. Preprocess an image or a batch of images so that it can be used by the model. post_process_object_detection < source > ( outputs threshold: float = 0.5 target_sizes: typing.Union[transformers.utils.generic.TensorType, typing.List[typing.Tuple]] = None nms_threshold: float = 0.7 ) → List[Dict] Parameters outputs (DetrObjectDetectionOutput) — Raw outputs of the model. threshold (float, optional, defaults to 0.5) — Score threshold to keep object detection predictions. target_sizes (torch.Tensor or List[Tuple[int, int]], optional) — Tensor of shape (batch_size, 2) or list of tuples (Tuple[int, int]) containing the target size (height, width) of each image in the batch. If left to None, predictions will not be resized. nms_threshold (float, optional, defaults to 0.7) — NMS threshold. A list of dictionaries, each dictionary containing the scores, labels and boxes for an image in the batch as predicted by the model. Converts the output of DetaForObjectDetection into final bounding boxes in (top_left_x, top_left_y, bottom_right_x, bottom_right_y) format. Only supports PyTorch. DetaModel class transformers.DetaModel < source > ( config: DetaConfig ) Parameters config (DetaConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare DETA Model (consisting of a backbone and encoder-decoder Transformer) outputting raw hidden-states without any specific head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. 
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: FloatTensor pixel_mask: typing.Optional[torch.LongTensor] = None decoder_attention_mask: typing.Optional[torch.FloatTensor] = None encoder_outputs: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.deta.modeling_deta.DetaModelOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See AutoImageProcessor.__call__() for details. pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) — Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]: 1 for pixels that are real (i.e. not masked), 0 for pixels that are padding (i.e. masked). What are attention masks? decoder_attention_mask (torch.FloatTensor of shape (batch_size, num_queries), optional) — Not used by default. Can be used to mask object queries. encoder_outputs (tuple(tuple(torch.FloatTensor), optional) — Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions) last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing the flattened feature map (output of the backbone + projection layer), you can choose to directly pass a flattened representation of an image. decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, num_queries, hidden_size), optional) — Optionally, instead of initializing the queries with a tensor of zeros, you can choose to directly pass an embedded representation. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.models.deta.modeling_deta.DetaModelOutput or tuple(torch.FloatTensor) A transformers.models.deta.modeling_deta.DetaModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DetaConfig) and inputs. init_reference_points (torch.FloatTensor of shape (batch_size, num_queries, 4)) — Initial reference points sent through the Transformer decoder. last_hidden_state (torch.FloatTensor of shape (batch_size, num_queries, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model. intermediate_hidden_states (torch.FloatTensor of shape (batch_size, config.decoder_layers, num_queries, hidden_size)) — Stacked intermediate hidden states (output of each layer of the decoder). 
intermediate_reference_points (torch.FloatTensor of shape (batch_size, config.decoder_layers, num_queries, 4)) — Stacked intermediate reference points (reference points of each layer of the decoder). decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, num_queries, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, num_queries, num_queries). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_queries, num_heads, 4, 4). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_queries, num_heads, 4, 4). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. enc_outputs_class (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels), optional, returned when config.with_box_refine=True and config.two_stage=True) — Predicted bounding boxes scores where the top config.two_stage_num_proposals scoring bounding boxes are picked as region proposals in the first stage. Output of bounding box binary classification (i.e. foreground and background). enc_outputs_coord_logits (torch.FloatTensor of shape (batch_size, sequence_length, 4), optional, returned when config.with_box_refine=True and config.two_stage=True) — Logits of predicted bounding boxes coordinates in the first stage. The DetaModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Examples: >>> from transformers import AutoImageProcessor, DetaModel >>> from PIL import Image >>> import requests >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image_processor = AutoImageProcessor.from_pretrained("jozhang97/deta-swin-large-o365") >>> model = DetaModel.from_pretrained("jozhang97/deta-swin-large-o365", two_stage=False) >>> inputs = image_processor(images=image, return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state >>> list(last_hidden_states.shape) [1, 900, 256] DetaForObjectDetection class transformers.DetaForObjectDetection < source > ( config: DetaConfig ) Parameters config (DetaConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DETA Model (consisting of a backbone and encoder-decoder Transformer) with object detection heads on top, for tasks such as COCO detection. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: FloatTensor pixel_mask: typing.Optional[torch.LongTensor] = None decoder_attention_mask: typing.Optional[torch.FloatTensor] = None encoder_outputs: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[typing.List[dict]] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.deta.modeling_deta.DetaObjectDetectionOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See AutoImageProcessor.__call__() for details. pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) — Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]: 1 for pixels that are real (i.e. not masked), 0 for pixels that are padding (i.e. masked). What are attention masks? decoder_attention_mask (torch.FloatTensor of shape (batch_size, num_queries), optional) — Not used by default. Can be used to mask object queries. encoder_outputs (tuple(tuple(torch.FloatTensor), optional) — Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions) last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing the flattened feature map (output of the backbone + projection layer), you can choose to directly pass a flattened representation of an image. 
decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, num_queries, hidden_size), optional) — Optionally, instead of initializing the queries with a tensor of zeros, you can choose to directly pass an embedded representation. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (List[Dict] of len (batch_size,), optional) — Labels for computing the bipartite matching loss. List of dicts, each dictionary containing at least the following 2 keys: ‘class_labels’ and ‘boxes’ (the class labels and bounding boxes of an image in the batch respectively). The class labels themselves should be a torch.LongTensor of len (number of bounding boxes in the image,) and the boxes a torch.FloatTensor of shape (number of bounding boxes in the image, 4). Returns transformers.models.deta.modeling_deta.DetaObjectDetectionOutput or tuple(torch.FloatTensor) A transformers.models.deta.modeling_deta.DetaObjectDetectionOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DetaConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels are provided) — Total loss as a linear combination of a negative log-likelihood (cross-entropy) for class prediction and a bounding box loss. The latter is defined as a linear combination of the L1 loss and the generalized scale-invariant IoU loss. loss_dict (Dict, optional) — A dictionary containing the individual losses. Useful for logging. logits (torch.FloatTensor of shape (batch_size, num_queries, num_classes + 1)) — Classification logits (including no-object) for all queries. pred_boxes (torch.FloatTensor of shape (batch_size, num_queries, 4)) — Normalized box coordinates for all queries, represented as (center_x, center_y, width, height). These values are normalized in [0, 1], relative to the size of each individual image in the batch (disregarding possible padding). You can use ~DetaImageProcessor.post_process_object_detection to retrieve the unnormalized bounding boxes. auxiliary_outputs (list[Dict], optional) — Optional, only returned when auxiliary losses are activated (i.e. config.auxiliary_loss is set to True) and labels are provided. It is a list of dictionaries containing the two above keys (logits and pred_boxes) for each decoder layer. last_hidden_state (torch.FloatTensor of shape (batch_size, num_queries, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the decoder of the model. decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, num_queries, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, num_queries, num_queries). 
Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_queries, num_heads, 4, 4). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, sequence_length, num_heads, 4, 4). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. intermediate_hidden_states (torch.FloatTensor of shape (batch_size, config.decoder_layers, num_queries, hidden_size)) — Stacked intermediate hidden states (output of each layer of the decoder). intermediate_reference_points (torch.FloatTensor of shape (batch_size, config.decoder_layers, num_queries, 4)) — Stacked intermediate reference points (reference points of each layer of the decoder). init_reference_points (torch.FloatTensor of shape (batch_size, num_queries, 4)) — Initial reference points sent through the Transformer decoder. enc_outputs_class (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels), optional, returned when config.with_box_refine=True and config.two_stage=True) — Predicted bounding boxes scores where the top config.two_stage_num_proposals scoring bounding boxes are picked as region proposals in the first stage. Output of bounding box binary classification (i.e. foreground and background). enc_outputs_coord_logits (torch.FloatTensor of shape (batch_size, sequence_length, 4), optional, returned when config.with_box_refine=True and config.two_stage=True) — Logits of predicted bounding boxes coordinates in the first stage. The DetaForObjectDetection forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Examples: >>> from transformers import AutoImageProcessor, DetaForObjectDetection >>> from PIL import Image >>> import requests >>> import torch >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image_processor = AutoImageProcessor.from_pretrained("jozhang97/deta-swin-large") >>> model = DetaForObjectDetection.from_pretrained("jozhang97/deta-swin-large") >>> inputs = image_processor(images=image, return_tensors="pt") >>> outputs = model(**inputs) >>> >>> target_sizes = torch.tensor([image.size[::-1]]) >>> results = image_processor.post_process_object_detection(outputs, threshold=0.5, target_sizes=target_sizes)[ ... 0 ... ] >>> for score, label, box in zip(results["scores"], results["labels"], results["boxes"]): ... box = [round(i, 2) for i in box.tolist()] ... print( ... f"Detected {model.config.id2label[label.item()]} with confidence " ... f"{round(score.item(), 3)} at location {box}" ... ) Detected cat with confidence 0.683 at location [345.85, 23.68, 639.86, 372.83] Detected cat with confidence 0.683 at location [8.8, 52.49, 316.93, 473.45] Detected remote with confidence 0.568 at location [40.02, 73.75, 175.96, 117.33] Detected remote with confidence 0.546 at location [333.68, 77.13, 370.12, 187.51]
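For fine-tuning, images and COCO-format targets can be prepared jointly with the image processor, following the annotation format documented for preprocess above. The sketch below is illustrative only: the image_id, category_id, bbox (top-left x, top-left y, width, height in pixels) and area values are made up, and the checkpoint is reused from the example above.

>>> from transformers import AutoImageProcessor, DetaForObjectDetection
>>> from PIL import Image
>>> import requests

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = AutoImageProcessor.from_pretrained("jozhang97/deta-swin-large")
>>> model = DetaForObjectDetection.from_pretrained("jozhang97/deta-swin-large")

>>> # COCO-style target: an "image_id" plus a list of annotation dicts
>>> target = {
...     "image_id": 39769,
...     "annotations": [
...         {"category_id": 17, "bbox": [10.0, 50.0, 300.0, 420.0], "area": 126000.0, "iscrowd": 0},
...     ],
... }
>>> encoding = image_processor(images=image, annotations=target, return_tensors="pt")
>>> # the processor converts the annotations into the "labels" format expected by the model
>>> outputs = model(
...     pixel_values=encoding["pixel_values"],
...     pixel_mask=encoding["pixel_mask"],
...     labels=encoding["labels"],
... )
>>> outputs.loss.backward()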
https://huggingface.co/docs/transformers/model_doc/deit
DeiT This is a recently introduced model so the API hasn’t been tested extensively. There may be some bugs or slight breaking changes to be fixed in the future. If you see something strange, file a GitHub Issue. Overview The DeiT model was proposed in Training data-efficient image transformers & distillation through attention by Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou. The Vision Transformer (ViT) introduced in Dosovitskiy et al., 2020 has shown that one can match or even outperform existing convolutional neural networks using a Transformer encoder (BERT-like). However, the ViT models introduced in that paper required training on expensive infrastructure for multiple weeks, using external data. DeiT (data-efficient image transformers) are more efficiently trained transformers for image classification, requiring far less data and far less computing resources compared to the original ViT models. The abstract from the paper is the following: Recently, neural networks purely based on attention were shown to address image understanding tasks such as image classification. However, these visual transformers are pre-trained with hundreds of millions of images using an expensive infrastructure, thereby limiting their adoption. In this work, we produce a competitive convolution-free transformer by training on Imagenet only. We train them on a single computer in less than 3 days. Our reference vision transformer (86M parameters) achieves top-1 accuracy of 83.1% (single-crop evaluation) on ImageNet with no external data. More importantly, we introduce a teacher-student strategy specific to transformers. It relies on a distillation token ensuring that the student learns from the teacher through attention. We show the interest of this token-based distillation, especially when using a convnet as a teacher. This leads us to report results competitive with convnets for both Imagenet (where we obtain up to 85.2% accuracy) and when transferring to other tasks. We share our code and models. Tips: Compared to ViT, DeiT models use a so-called distillation token to effectively learn from a teacher (which, in the DeiT paper, is a ResNet-like model). The distillation token is learned through backpropagation, by interacting with the class ([CLS]) and patch tokens through the self-attention layers. There are 2 ways to fine-tune distilled models, either (1) in a classic way, by only placing a prediction head on top of the final hidden state of the class token and not using the distillation signal, or (2) by placing both a prediction head on top of the class token and on top of the distillation token. In that case, the [CLS] prediction head is trained using regular cross-entropy between the prediction of the head and the ground-truth label, while the distillation prediction head is trained using hard distillation (cross-entropy between the prediction of the distillation head and the label predicted by the teacher). At inference time, one takes the average prediction between both heads as final prediction. (2) is also called “fine-tuning with distillation”, because one relies on a teacher that has already been fine-tuned on the downstream dataset. In terms of models, (1) corresponds to DeiTForImageClassification and (2) corresponds to DeiTForImageClassificationWithTeacher. 
Note that the authors also did try soft distillation for (2) (in which case the distillation prediction head is trained using KL divergence to match the softmax output of the teacher), but hard distillation gave the best results. All released checkpoints were pre-trained and fine-tuned on ImageNet-1k only. No external data was used. This is in contrast with the original ViT model, which used external data like the JFT-300M dataset/Imagenet-21k for pre-training. The authors of DeiT also released more efficiently trained ViT models, which you can directly plug into ViTModel or ViTForImageClassification. Techniques like data augmentation, optimization, and regularization were used in order to simulate training on a much larger dataset (while only using ImageNet-1k for pre-training). There are 4 variants available (in 3 different sizes): facebook/deit-tiny-patch16-224, facebook/deit-small-patch16-224, facebook/deit-base-patch16-224 and facebook/deit-base-patch16-384. Note that one should use DeiTImageProcessor in order to prepare images for the model. This model was contributed by nielsr. The TensorFlow version of this model was added by amyeroberts. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DeiT. Image Classification DeiTForImageClassification is supported by this example script and notebook. See also: Image classification task guide Besides that: DeiTForMaskedImageModeling is supported by this example script. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. DeiTConfig class transformers.DeiTConfig < source > ( hidden_size = 768 num_hidden_layers = 12 num_attention_heads = 12 intermediate_size = 3072 hidden_act = 'gelu' hidden_dropout_prob = 0.0 attention_probs_dropout_prob = 0.0 initializer_range = 0.02 layer_norm_eps = 1e-12 image_size = 224 patch_size = 16 num_channels = 3 qkv_bias = True encoder_stride = 16 **kwargs ) Parameters hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder. hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported. hidden_dropout_prob (float, optional, defaults to 0.1) — The dropout probabilitiy for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (float, optional, defaults to 0.1) — The dropout ratio for the attention probabilities. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers. image_size (int, optional, defaults to 224) — The size (resolution) of each image. patch_size (int, optional, defaults to 16) — The size (resolution) of each patch. 
num_channels (int, optional, defaults to 3) — The number of input channels. qkv_bias (bool, optional, defaults to True) — Whether to add a bias to the queries, keys and values. encoder_stride (int, optional, defaults to 16) — Factor to increase the spatial resolution by in the decoder head for masked image modeling. This is the configuration class to store the configuration of a DeiTModel. It is used to instantiate a DeiT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the DeiT facebook/deit-base-distilled-patch16-224 architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import DeiTConfig, DeiTModel >>> >>> configuration = DeiTConfig() >>> >>> model = DeiTModel(configuration) >>> >>> configuration = model.config DeiTFeatureExtractor Preprocess an image or a batch of images. DeiTImageProcessor class transformers.DeiTImageProcessor < source > ( do_resize: bool = True size: typing.Dict[str, int] = None resample: Resampling = 3 do_center_crop: bool = True crop_size: typing.Dict[str, int] = None rescale_factor: typing.Union[int, float] = 0.00392156862745098 do_rescale: bool = True do_normalize: bool = True image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None **kwargs ) Parameters do_resize (bool, optional, defaults to True) — Whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by do_resize in preprocess. size (Dict[str, int], optional, defaults to {"height": 256, "width": 256}) — Size of the image after resize. Can be overridden by size in preprocess. resample (PILImageResampling filter, optional, defaults to PILImageResampling.BICUBIC) — Resampling filter to use if resizing the image. Can be overridden by resample in preprocess. do_center_crop (bool, optional, defaults to True) — Whether to center crop the image. If the input size is smaller than crop_size along any edge, the image is padded with 0’s and then center cropped. Can be overridden by do_center_crop in preprocess. crop_size (Dict[str, int], optional, defaults to {"height": 224, "width": 224}) — Desired output size when applying center-cropping. Can be overridden by crop_size in preprocess. do_rescale (bool, optional, defaults to True) — Whether to rescale the image by the specified scale rescale_factor. Can be overridden by the do_rescale parameter in the preprocess method. rescale_factor (int or float, optional, defaults to 1/255) — Scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter in the preprocess method. do_normalize (bool, optional, defaults to True) — Whether to normalize the image. Can be overridden by the do_normalize parameter in the preprocess method. image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) — Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the image_mean parameter in the preprocess method. image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) — Standard deviation to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. 
Can be overridden by the image_std parameter in the preprocess method. Constructs a DeiT image processor. preprocess < source > ( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]] do_resize: bool = None size: typing.Dict[str, int] = None resample = None do_center_crop: bool = None crop_size: typing.Dict[str, int] = None do_rescale: bool = None rescale_factor: float = None do_normalize: bool = None image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None data_format: ChannelDimension = <ChannelDimension.FIRST: 'channels_first'> input_data_format: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None **kwargs ) Parameters images (ImageInput) — Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set do_rescale=False. do_resize (bool, optional, defaults to self.do_resize) — Whether to resize the image. size (Dict[str, int], optional, defaults to self.size) — Size of the image after resize. resample (PILImageResampling, optional, defaults to self.resample) — PILImageResampling filter to use if resizing the image. Only has an effect if do_resize is set to True. do_center_crop (bool, optional, defaults to self.do_center_crop) — Whether to center crop the image. crop_size (Dict[str, int], optional, defaults to self.crop_size) — Size of the image after center crop. If one edge of the image is smaller than crop_size, it will be padded with zeros and then cropped. do_rescale (bool, optional, defaults to self.do_rescale) — Whether to rescale the image values to the range [0, 1]. rescale_factor (float, optional, defaults to self.rescale_factor) — Rescale factor to rescale the image by if do_rescale is set to True. do_normalize (bool, optional, defaults to self.do_normalize) — Whether to normalize the image. image_mean (float or List[float], optional, defaults to self.image_mean) — Image mean. image_std (float or List[float], optional, defaults to self.image_std) — Image standard deviation. return_tensors (str or TensorType, optional) — The type of tensors to return. Can be one of: None: Return a list of np.ndarray. TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor. TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor. TensorType.NUMPY or 'np': Return a batch of type np.ndarray. TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray. data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) — The channel dimension format for the output image. Can be one of: ChannelDimension.FIRST: image in (num_channels, height, width) format. ChannelDimension.LAST: image in (height, width, num_channels) format. input_data_format (ChannelDimension or str, optional) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of: "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format. "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format. "none" or ChannelDimension.NONE: image in (height, width) format. Preprocess an image or batch of images.
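For example, here is a minimal sketch of preparing a single image with the processor defaults described above (resize to 256, center crop to 224, rescale and normalize); the COCO image URL and checkpoint name simply mirror the other examples on this page, and the printed shape assumes those default settings:
>>> from transformers import DeiTImageProcessor
>>> from PIL import Image
>>> import requests
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> # load the processor settings from the hub and run resize, crop, rescale and normalization in one call
>>> image_processor = DeiTImageProcessor.from_pretrained("facebook/deit-base-distilled-patch16-224")
>>> inputs = image_processor(images=image, return_tensors="pt")
>>> list(inputs["pixel_values"].shape)
[1, 3, 224, 224]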
DeiTModel class transformers.DeiTModel < source > ( config: DeiTConfig add_pooling_layer: bool = True use_mask_token: bool = False ) Parameters config (DeiTConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare DeiT Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: typing.Optional[torch.Tensor] = None bool_masked_pos: typing.Optional[torch.BoolTensor] = None head_mask: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See DeiTImageProcessor.call() for details. head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. bool_masked_pos (torch.BoolTensor of shape (batch_size, num_patches), optional) — Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0). A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DeiTConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. 
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The DeiTModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoImageProcessor, DeiTModel >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> image_processor = AutoImageProcessor.from_pretrained("facebook/deit-base-distilled-patch16-224") >>> model = DeiTModel.from_pretrained("facebook/deit-base-distilled-patch16-224") >>> inputs = image_processor(image, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state >>> list(last_hidden_states.shape) [1, 198, 768] DeiTForMaskedImageModeling class transformers.DeiTForMaskedImageModeling < source > ( config: DeiTConfig ) Parameters config (DeiTConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DeiT Model with a decoder on top for masked image modeling, as proposed in SimMIM. Note that we provide a script to pre-train this model on custom data in our examples directory. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: typing.Optional[torch.Tensor] = None bool_masked_pos: typing.Optional[torch.BoolTensor] = None head_mask: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MaskedImageModelingOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See DeiTImageProcessor.call() for details. head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. bool_masked_pos (torch.BoolTensor of shape (batch_size, num_patches)) — Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0). 
Returns transformers.modeling_outputs.MaskedImageModelingOutput or tuple(torch.FloatTensor) A transformers.modeling_outputs.MaskedImageModelingOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DeiTConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when bool_masked_pos is provided) — Reconstruction loss. reconstruction (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Reconstructed / completed images. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states (also called feature maps) of the model at the output of each stage. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True): Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The DeiTForMaskedImageModeling forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import AutoImageProcessor, DeiTForMaskedImageModeling >>> import torch >>> from PIL import Image >>> import requests >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image_processor = AutoImageProcessor.from_pretrained("facebook/deit-base-distilled-patch16-224") >>> model = DeiTForMaskedImageModeling.from_pretrained("facebook/deit-base-distilled-patch16-224") >>> num_patches = (model.config.image_size // model.config.patch_size) ** 2 >>> pixel_values = image_processor(images=image, return_tensors="pt").pixel_values >>> >>> bool_masked_pos = torch.randint(low=0, high=2, size=(1, num_patches)).bool() >>> outputs = model(pixel_values, bool_masked_pos=bool_masked_pos) >>> loss, reconstructed_pixel_values = outputs.loss, outputs.reconstruction >>> list(reconstructed_pixel_values.shape) [1, 3, 224, 224] DeiTForImageClassification class transformers.DeiTForImageClassification < source > ( config: DeiTConfig ) Parameters config (DeiTConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DeiT Model transformer with an image classification head on top (a linear layer on top of the final hidden state of the [CLS] token) e.g. for ImageNet. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
forward < source > ( pixel_values: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.ImageClassifierOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See DeiTImageProcessor.call() for details. head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_outputs.ImageClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DeiTConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states (also called feature maps) of the model at the output of each stage. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The DeiTForImageClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Examples: >>> from transformers import AutoImageProcessor, DeiTForImageClassification >>> import torch >>> from PIL import Image >>> import requests >>> torch.manual_seed(3) >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> # note: this checkpoint was trained with a distillation head, so loading it into DeiTForImageClassification >>> # leaves the classification head randomly initialized and the prediction will be random >>> image_processor = AutoImageProcessor.from_pretrained("facebook/deit-base-distilled-patch16-224") >>> model = DeiTForImageClassification.from_pretrained("facebook/deit-base-distilled-patch16-224") >>> inputs = image_processor(images=image, return_tensors="pt") >>> outputs = model(**inputs) >>> logits = outputs.logits >>> # model predicts one of the 1000 ImageNet classes >>> predicted_class_idx = logits.argmax(-1).item() >>> print("Predicted class:", model.config.id2label[predicted_class_idx]) Predicted class: magpie DeiTForImageClassificationWithTeacher class transformers.DeiTForImageClassificationWithTeacher < source > ( config: DeiTConfig ) Parameters config (DeiTConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DeiT Model transformer with image classification heads on top (a linear layer on top of the final hidden state of the [CLS] token and a linear layer on top of the final hidden state of the distillation token) e.g. for ImageNet. Warning: This model supports inference-only. Fine-tuning with distillation (i.e. with a teacher) is not yet supported. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.deit.modeling_deit.DeiTForImageClassificationWithTeacherOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See DeiTImageProcessor.call() for details. head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.models.deit.modeling_deit.DeiTForImageClassificationWithTeacherOutput or tuple(torch.FloatTensor) A transformers.models.deit.modeling_deit.DeiTForImageClassificationWithTeacherOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DeiTConfig) and inputs. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Prediction scores as the average of the cls_logits and distillation logits.
cls_logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Prediction scores of the classification head (i.e. the linear layer on top of the final hidden state of the class token). distillation_logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Prediction scores of the distillation head (i.e. the linear layer on top of the final hidden state of the distillation token). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The DeiTForImageClassificationWithTeacher forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoImageProcessor, DeiTForImageClassificationWithTeacher >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> image_processor = AutoImageProcessor.from_pretrained("facebook/deit-base-distilled-patch16-224") >>> model = DeiTForImageClassificationWithTeacher.from_pretrained("facebook/deit-base-distilled-patch16-224") >>> inputs = image_processor(image, return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> >>> predicted_label = logits.argmax(-1).item() >>> print(model.config.id2label[predicted_label]) tabby, tabby cat TFDeiTModel class transformers.TFDeiTModel < source > ( *args **kwargs ) Parameters config (DeiTConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare DeiT Model transformer outputting raw hidden-states without any specific head on top. This model is a TensorFlow tf.keras.layers.Layer. Use it as a regular TensorFlow Module and refer to the TensorFlow documentation for all matter related to general usage and behavior. call < source > ( pixel_values: tf.Tensor | None = None bool_masked_pos: tf.Tensor | None = None head_mask: tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: bool = False ) → transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor) Parameters pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See DeiTImageProcessor.call() for details. head_mask (tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. 
Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DeiTConfig) and inputs. last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. This output is usually not a good summary of the semantic content of the input, you’re often better with averaging or pooling the sequence of hidden-states for the whole input sequence. hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFDeiTModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoImageProcessor, TFDeiTModel >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> image_processor = AutoImageProcessor.from_pretrained("facebook/deit-base-distilled-patch16-224") >>> model = TFDeiTModel.from_pretrained("facebook/deit-base-distilled-patch16-224") >>> inputs = image_processor(image, return_tensors="tf") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state >>> list(last_hidden_states.shape) [1, 198, 768] TFDeiTForMaskedImageModeling class transformers.TFDeiTForMaskedImageModeling < source > ( *args **kwargs ) Parameters config (DeiTConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DeiT Model with a decoder on top for masked image modeling, as proposed in SimMIM. 
This model is a TensorFlow tf.keras.layers.Layer. Use it as a regular TensorFlow Module and refer to the TensorFlow documentation for all matter related to general usage and behavior. call < source > ( pixel_values: tf.Tensor | None = None bool_masked_pos: tf.Tensor | None = None head_mask: tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: bool = False ) → transformers.modeling_tf_outputs.TFMaskedImageModelingOutput or tuple(tf.Tensor) Parameters pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See DeiTImageProcessor.call() for details. head_mask (tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. bool_masked_pos (tf.Tensor of type bool and shape (batch_size, num_patches)) — Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0). Returns transformers.modeling_tf_outputs.TFMaskedImageModelingOutput or tuple(tf.Tensor) A transformers.modeling_tf_outputs.TFMaskedImageModelingOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DeiTConfig) and inputs. loss (tf.Tensor of shape (1,), optional, returned when bool_masked_pos is provided) — Reconstruction loss. reconstruction (tf.Tensor of shape (batch_size, num_channels, height, width)) — Reconstructed / completed images. hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True): Tuple of tf.Tensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states (also called feature maps) of the model at the output of each stage. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True): Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFDeiTForMaskedImageModeling forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Examples: >>> from transformers import AutoImageProcessor, TFDeiTForMaskedImageModeling >>> import tensorflow as tf >>> from PIL import Image >>> import requests >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image_processor = AutoImageProcessor.from_pretrained("facebook/deit-base-distilled-patch16-224") >>> model = TFDeiTForMaskedImageModeling.from_pretrained("facebook/deit-base-distilled-patch16-224") >>> num_patches = (model.config.image_size // model.config.patch_size) ** 2 >>> pixel_values = image_processor(images=image, return_tensors="tf").pixel_values >>> >>> bool_masked_pos = tf.cast(tf.random.uniform((1, num_patches), minval=0, maxval=2, dtype=tf.int32), tf.bool) >>> outputs = model(pixel_values, bool_masked_pos=bool_masked_pos) >>> loss, reconstructed_pixel_values = outputs.loss, outputs.reconstruction >>> list(reconstructed_pixel_values.shape) [1, 3, 224, 224] TFDeiTForImageClassification class transformers.TFDeiTForImageClassification < source > ( *args **kwargs ) Parameters config (DeiTConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DeiT Model transformer with an image classification head on top (a linear layer on top of the final hidden state of the [CLS] token) e.g. for ImageNet. This model is a TensorFlow tf.keras.layers.Layer. Use it as a regular TensorFlow Module and refer to the TensorFlow documentation for all matter related to general usage and behavior. call < source > ( pixel_values: tf.Tensor | None = None head_mask: tf.Tensor | None = None labels: tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: bool = False ) → transformers.modeling_tf_outputs.TFImageClassifierOutput or tuple(tf.Tensor) Parameters pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See DeiTImageProcessor.call() for details. head_mask (tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (tf.Tensor of shape (batch_size,), optional) — Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). Returns transformers.modeling_tf_outputs.TFImageClassifierOutput or tuple(tf.Tensor) A transformers.modeling_tf_outputs.TFImageClassifierOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DeiTConfig) and inputs. 
loss (tf.Tensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states (also called feature maps) of the model at the output of each stage. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFDeiTForImageClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import AutoImageProcessor, TFDeiTForImageClassification >>> import tensorflow as tf >>> from PIL import Image >>> import requests >>> tf.keras.utils.set_random_seed(3) >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> # note: this checkpoint was trained with a distillation head, so loading it into TFDeiTForImageClassification >>> # leaves the classification head randomly initialized and the prediction will be random >>> image_processor = AutoImageProcessor.from_pretrained("facebook/deit-base-distilled-patch16-224") >>> model = TFDeiTForImageClassification.from_pretrained("facebook/deit-base-distilled-patch16-224") >>> inputs = image_processor(images=image, return_tensors="tf") >>> outputs = model(**inputs) >>> logits = outputs.logits >>> # model predicts one of the 1000 ImageNet classes >>> predicted_class_idx = tf.math.argmax(logits, axis=-1)[0] >>> print("Predicted class:", model.config.id2label[int(predicted_class_idx)]) Predicted class: little blue heron, Egretta caerulea TFDeiTForImageClassificationWithTeacher class transformers.TFDeiTForImageClassificationWithTeacher < source > ( *args **kwargs ) Parameters config (DeiTConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DeiT Model transformer with image classification heads on top (a linear layer on top of the final hidden state of the [CLS] token and a linear layer on top of the final hidden state of the distillation token) e.g. for ImageNet. Warning: This model supports inference-only. Fine-tuning with distillation (i.e. with a teacher) is not yet supported. This model is a TensorFlow tf.keras.layers.Layer. Use it as a regular TensorFlow Module and refer to the TensorFlow documentation for all matter related to general usage and behavior.
call < source > ( pixel_values: tf.Tensor | None = None head_mask: tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: bool = False ) → transformers.models.deit.modeling_tf_deit.TFDeiTForImageClassificationWithTeacherOutput or tuple(tf.Tensor) Parameters pixel_values (tf.Tensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See DeiTImageProcessor.call() for details. head_mask (tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.models.deit.modeling_tf_deit.TFDeiTForImageClassificationWithTeacherOutput or tuple(tf.Tensor) A transformers.models.deit.modeling_tf_deit.TFDeiTForImageClassificationWithTeacherOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DeiTConfig) and inputs. logits (tf.Tensor of shape (batch_size, config.num_labels)) — Prediction scores as the average of the cls_logits and distillation logits. cls_logits (tf.Tensor of shape (batch_size, config.num_labels)) — Prediction scores of the classification head (i.e. the linear layer on top of the final hidden state of the class token). distillation_logits (tf.Tensor of shape (batch_size, config.num_labels)) — Prediction scores of the distillation head (i.e. the linear layer on top of the final hidden state of the distillation token). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFDeiTForImageClassificationWithTeacher forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: >>> from transformers import AutoImageProcessor, TFDeiTForImageClassificationWithTeacher >>> import tensorflow as tf >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> image_processor = AutoImageProcessor.from_pretrained("facebook/deit-base-distilled-patch16-224") >>> model = TFDeiTForImageClassificationWithTeacher.from_pretrained("facebook/deit-base-distilled-patch16-224") >>> inputs = image_processor(image, return_tensors="tf") >>> logits = model(**inputs).logits >>> >>> predicted_label = int(tf.math.argmax(logits, axis=-1)) >>> print(model.config.id2label[predicted_label]) tabby, tabby cat
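Because the output documented above also exposes the per-head scores, cls_logits and distillation_logits can be inspected separately. The following is a small illustrative sketch (not part of the official examples) that reuses the PyTorch setup and checkpoint from earlier on this page and simply checks that the returned logits are the average of the two heads:
>>> from transformers import AutoImageProcessor, DeiTForImageClassificationWithTeacher
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> image_processor = AutoImageProcessor.from_pretrained("facebook/deit-base-distilled-patch16-224")
>>> model = DeiTForImageClassificationWithTeacher.from_pretrained("facebook/deit-base-distilled-patch16-224")
>>> inputs = image_processor(image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> # logits is documented as the average of the class-token head and the distillation head
>>> torch.allclose((outputs.cls_logits + outputs.distillation_logits) / 2, outputs.logits)
True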
https://huggingface.co/docs/transformers/model_doc/dinat
Dilated Neighborhood Attention Transformer Overview DiNAT was proposed in Dilated Neighborhood Attention Transformer by Ali Hassani and Humphrey Shi. It extends NAT by adding a Dilated Neighborhood Attention pattern to capture global context, and shows significant performance improvements over it. The abstract from the paper is the following: Transformers are quickly becoming one of the most heavily applied deep learning architectures across modalities, domains, and tasks. In vision, on top of ongoing efforts into plain transformers, hierarchical transformers have also gained significant attention, thanks to their performance and easy integration into existing frameworks. These models typically employ localized attention mechanisms, such as the sliding-window Neighborhood Attention (NA) or Swin Transformer’s Shifted Window Self Attention. While effective at reducing self attention’s quadratic complexity, local attention weakens two of the most desirable properties of self attention: long range inter-dependency modeling, and global receptive field. In this paper, we introduce Dilated Neighborhood Attention (DiNA), a natural, flexible and efficient extension to NA that can capture more global context and expand receptive fields exponentially at no additional cost. NA’s local attention and DiNA’s sparse global attention complement each other, and therefore we introduce Dilated Neighborhood Attention Transformer (DiNAT), a new hierarchical vision transformer built upon both. DiNAT variants enjoy significant improvements over strong baselines such as NAT, Swin, and ConvNeXt. Our large model is faster and ahead of its Swin counterpart by 1.5% box AP in COCO object detection, 1.3% mask AP in COCO instance segmentation, and 1.1% mIoU in ADE20K semantic segmentation. Paired with new frameworks, our large variant is the new state of the art panoptic segmentation model on COCO (58.2 PQ) and ADE20K (48.5 PQ), and instance segmentation model on Cityscapes (44.5 AP) and ADE20K (35.4 AP) (no extra data). It also matches the state of the art specialized semantic segmentation models on ADE20K (58.2 mIoU), and ranks second on Cityscapes (84.5 mIoU) (no extra data). Tips: One can use the AutoImageProcessor API to prepare images for the model. DiNAT can be used as a backbone. When output_hidden_states = True, it will output both hidden_states and reshaped_hidden_states. The reshaped_hidden_states have a shape of (batch, num_channels, height, width) rather than (batch_size, height, width, num_channels). Notes: DiNAT depends on NATTEN’s implementation of Neighborhood Attention and Dilated Neighborhood Attention. You can install it with pre-built wheels for Linux by referring to shi-labs.com/natten, or build on your system by running pip install natten. Note that the latter will likely take time to compile. NATTEN does not support Windows devices yet. Patch size of 4 is only supported at the moment. Neighborhood Attention with different dilation values. Taken from the original paper. This model was contributed by Ali Hassani. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DiNAT. Image Classification DinatForImageClassification is supported by this example script and notebook. See also: Image classification task guide If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! 
The resource should ideally demonstrate something new instead of duplicating an existing resource. DinatConfig class transformers.DinatConfig < source > ( patch_size = 4 num_channels = 3 embed_dim = 64 depths = [3, 4, 6, 5] num_heads = [2, 4, 8, 16] kernel_size = 7 dilations = [[1, 8, 1], [1, 4, 1, 4], [1, 2, 1, 2, 1, 2], [1, 1, 1, 1, 1]] mlp_ratio = 3.0 qkv_bias = True hidden_dropout_prob = 0.0 attention_probs_dropout_prob = 0.0 drop_path_rate = 0.1 hidden_act = 'gelu' initializer_range = 0.02 layer_norm_eps = 1e-05 layer_scale_init_value = 0.0 out_features = None out_indices = None **kwargs ) Parameters patch_size (int, optional, defaults to 4) — The size (resolution) of each patch. NOTE: Only patch size of 4 is supported at the moment. num_channels (int, optional, defaults to 3) — The number of input channels. embed_dim (int, optional, defaults to 64) — Dimensionality of patch embedding. depths (List[int], optional, defaults to [3, 4, 6, 5]) — Number of layers in each level of the encoder. num_heads (List[int], optional, defaults to [2, 4, 8, 16]) — Number of attention heads in each layer of the Transformer encoder. kernel_size (int, optional, defaults to 7) — Neighborhood Attention kernel size. dilations (List[List[int]], optional, defaults to [[1, 8, 1], [1, 4, 1, 4], [1, 2, 1, 2, 1, 2], [1, 1, 1, 1, 1]]) — Dilation value of each NA layer in the Transformer encoder. mlp_ratio (float, optional, defaults to 3.0) — Ratio of MLP hidden dimensionality to embedding dimensionality. qkv_bias (bool, optional, defaults to True) — Whether or not a learnable bias should be added to the queries, keys and values. hidden_dropout_prob (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings and encoder. attention_probs_dropout_prob (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities. drop_path_rate (float, optional, defaults to 0.1) — Stochastic depth rate. hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder. If string, "gelu", "relu", "selu" and "gelu_new" are supported. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (float, optional, defaults to 1e-05) — The epsilon used by the layer normalization layers. layer_scale_init_value (float, optional, defaults to 0.0) — The initial value for the layer scale. Disabled if <=0. out_features (List[str], optional) — If used as backbone, list of features to output. Can be any of "stem", "stage1", "stage2", etc. (depending on how many stages the model has). If unset and out_indices is set, will default to the corresponding stages. If unset and out_indices is unset, will default to the last stage. out_indices (List[int], optional) — If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how many stages the model has). If unset and out_features is set, will default to the corresponding stages. If unset and out_features is unset, will default to the last stage. This is the configuration class to store the configuration of a DinatModel. It is used to instantiate a Dinat model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Dinat shi-labs/dinat-mini-in1k-224 architecture.
Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import DinatConfig, DinatModel >>> >>> configuration = DinatConfig() >>> >>> model = DinatModel(configuration) >>> >>> configuration = model.config DinatModel class transformers.DinatModel < source > ( config add_pooling_layer = True ) Parameters config (DinatConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare Dinat Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.dinat.modeling_dinat.DinatModelOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See ViTImageProcessor.call() for details. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.models.dinat.modeling_dinat.DinatModelOutput or tuple(torch.FloatTensor) A transformers.models.dinat.modeling_dinat.DinatModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DinatConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (torch.FloatTensor of shape (batch_size, hidden_size), optional, returned when add_pooling_layer=True is passed) — Average pooling of the last layer hidden-state. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each stage) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. 
reshaped_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of shape (batch_size, hidden_size, height, width). Hidden-states of the model at the output of each layer plus the initial embedding outputs reshaped to include the spatial dimensions. The DinatModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoImageProcessor, DinatModel >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> image_processor = AutoImageProcessor.from_pretrained("shi-labs/dinat-mini-in1k-224") >>> model = DinatModel.from_pretrained("shi-labs/dinat-mini-in1k-224") >>> inputs = image_processor(image, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state >>> list(last_hidden_states.shape) [1, 7, 7, 512] DinatForImageClassification class transformers.DinatForImageClassification < source > ( config ) Parameters config (DinatConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Dinat Model transformer with an image classification head on top (a linear layer on top of the final hidden state of the [CLS] token) e.g. for ImageNet. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.dinat.modeling_dinat.DinatImageClassifierOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See ViTImageProcessor.call() for details. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). 
Returns transformers.models.dinat.modeling_dinat.DinatImageClassifierOutput or tuple(torch.FloatTensor) A transformers.models.dinat.modeling_dinat.DinatImageClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DinatConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each stage) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. reshaped_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of shape (batch_size, hidden_size, height, width). Hidden-states of the model at the output of each layer plus the initial embedding outputs reshaped to include the spatial dimensions. The DinatForImageClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoImageProcessor, DinatForImageClassification >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> image_processor = AutoImageProcessor.from_pretrained("shi-labs/dinat-mini-in1k-224") >>> model = DinatForImageClassification.from_pretrained("shi-labs/dinat-mini-in1k-224") >>> inputs = image_processor(image, return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> >>> predicted_label = logits.argmax(-1).item() >>> print(model.config.id2label[predicted_label]) tabby, tabby cat
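As noted in the tips at the top of this page, passing output_hidden_states=True makes the model additionally return reshaped_hidden_states in (batch_size, num_channels, height, width) format. Below is a minimal sketch of inspecting these outputs with the same shi-labs/dinat-mini-in1k-224 checkpoint used above; the printed shape assumes the default configuration of that checkpoint:
>>> from transformers import AutoImageProcessor, DinatModel
>>> import torch
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> image_processor = AutoImageProcessor.from_pretrained("shi-labs/dinat-mini-in1k-224")
>>> model = DinatModel.from_pretrained("shi-labs/dinat-mini-in1k-224")
>>> inputs = image_processor(image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs, output_hidden_states=True)
>>> # the last reshaped hidden state keeps the spatial layout: (batch_size, num_channels, height, width)
>>> list(outputs.reshaped_hidden_states[-1].shape)
[1, 512, 7, 7]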
https://huggingface.co/docs/transformers/model_doc/distilbert
DistilBERT Overview The DistilBERT model was proposed in the blog post Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT, and the paper DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. DistilBERT is a small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% fewer parameters than bert-base-uncased and runs 60% faster, while preserving over 95% of BERT’s performance as measured on the GLUE language understanding benchmark. The abstract from the paper is the following: As Transfer Learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP), operating these large models in on-the-edge and/or under constrained computational training or inference budgets remains challenging. In this work, we propose a method to pre-train a smaller general-purpose language representation model, called DistilBERT, which can then be fine-tuned with good performances on a wide range of tasks like its larger counterparts. While most prior work investigated the use of distillation for building task-specific models, we leverage knowledge distillation during the pretraining phase and show that it is possible to reduce the size of a BERT model by 40%, while retaining 97% of its language understanding capabilities and being 60% faster. To leverage the inductive biases learned by larger models during pretraining, we introduce a triple loss combining language modeling, distillation and cosine-distance losses. Our smaller, faster and lighter model is cheaper to pre-train and we demonstrate its capabilities for on-device computations in a proof-of-concept experiment and a comparative on-device study. Tips: DistilBERT doesn’t have token_type_ids, so you don’t need to indicate which token belongs to which segment. Just separate your segments with the separation token tokenizer.sep_token (or [SEP]). DistilBERT doesn’t have an option to select the input positions (position_ids input). This could be added if necessary though; just let us know if you need this option. DistilBERT is the same as BERT but smaller. It was trained by distillation of the pretrained BERT model, meaning it’s been trained to predict the same probabilities as the larger model. The actual objective is a combination of: finding the same probabilities as the teacher model; predicting the masked tokens correctly (but no next-sentence objective); and a cosine similarity between the hidden states of the student and the teacher model. This model was contributed by victorsanh. The JAX version of this model was contributed by kamalkraj. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DistilBERT. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. Text Classification A blog post on Getting Started with Sentiment Analysis using Python with DistilBERT. A blog post on how to train DistilBERT with Blurr for sequence classification. A blog post on how to use Ray to tune DistilBERT hyperparameters. A blog post on how to train DistilBERT with Hugging Face and Amazon SageMaker. A notebook on how to finetune DistilBERT for multi-label classification. 🌎 A notebook on how to finetune DistilBERT for multiclass classification with PyTorch.
🌎 A notebook on how to finetune DistilBERT for text classification in TensorFlow. 🌎 DistilBertForSequenceClassification is supported by this example script and notebook. TFDistilBertForSequenceClassification is supported by this example script and notebook. FlaxDistilBertForSequenceClassification is supported by this example script and notebook. Text classification task guide Token Classification DistilBertForTokenClassification is supported by this example script and notebook. TFDistilBertForTokenClassification is supported by this example script and notebook. FlaxDistilBertForTokenClassification is supported by this example script. Token classification chapter of the 🤗 Hugging Face Course. Token classification task guide Fill-Mask DistilBertForMaskedLM is supported by this example script and notebook. TFDistilBertForMaskedLM is supported by this example script and notebook. FlaxDistilBertForMaskedLM is supported by this example script and notebook. Masked language modeling chapter of the 🤗 Hugging Face Course. Masked language modeling task guide Question Answering DistilBertForQuestionAnswering is supported by this example script and notebook. TFDistilBertForQuestionAnswering is supported by this example script and notebook. FlaxDistilBertForQuestionAnswering is supported by this example script. Question answering chapter of the 🤗 Hugging Face Course. Question answering task guide Multiple choice DistilBertForMultipleChoice is supported by this example script and notebook. TFDistilBertForMultipleChoice is supported by this example script and notebook. Multiple choice task guide ⚗️ Optimization A blog post on how to quantize DistilBERT with 🤗 Optimum and Intel. A blog post on optimizing Transformers for GPUs with 🤗 Optimum. A blog post on Optimizing Transformers with Hugging Face Optimum. ⚡️ Inference A blog post on how to accelerate BERT inference with Hugging Face Transformers and AWS Inferentia with DistilBERT. A blog post on Serverless Inference with Hugging Face’s Transformers, DistilBERT and Amazon SageMaker. 🚀 Deploy A blog post on how to deploy DistilBERT on Google Cloud. A blog post on how to deploy DistilBERT with Amazon SageMaker. A blog post on how to deploy BERT with Hugging Face Transformers, Amazon SageMaker and a Terraform module. DistilBertConfig class transformers.DistilBertConfig < source > ( vocab_size = 30522 max_position_embeddings = 512 sinusoidal_pos_embds = False n_layers = 6 n_heads = 12 dim = 768 hidden_dim = 3072 dropout = 0.1 attention_dropout = 0.1 activation = 'gelu' initializer_range = 0.02 qa_dropout = 0.1 seq_classif_dropout = 0.2 pad_token_id = 0 **kwargs ) Parameters vocab_size (int, optional, defaults to 30522) — Vocabulary size of the DistilBERT model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling DistilBertModel or TFDistilBertModel. max_position_embeddings (int, optional, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). sinusoidal_pos_embds (boolean, optional, defaults to False) — Whether to use sinusoidal positional embeddings. n_layers (int, optional, defaults to 6) — Number of hidden layers in the Transformer encoder. n_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. dim (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer.
hidden_dim (int, optional, defaults to 3072) — The size of the “intermediate” (often named feed-forward) layer in the Transformer encoder. dropout (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_dropout (float, optional, defaults to 0.1) — The dropout ratio for the attention probabilities. activation (str or Callable, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. qa_dropout (float, optional, defaults to 0.1) — The dropout probability used in the question answering model DistilBertForQuestionAnswering. seq_classif_dropout (float, optional, defaults to 0.2) — The dropout probability used in the sequence classification and the multiple choice model DistilBertForSequenceClassification. This is the configuration class to store the configuration of a DistilBertModel or a TFDistilBertModel. It is used to instantiate a DistilBERT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the DistilBERT distilbert-base-uncased architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Examples: >>> from transformers import DistilBertConfig, DistilBertModel >>> # Initializing a DistilBERT configuration >>> configuration = DistilBertConfig() >>> # Initializing a model (with random weights) from the configuration >>> model = DistilBertModel(configuration) >>> # Accessing the model configuration >>> configuration = model.config DistilBertTokenizer class transformers.DistilBertTokenizer < source > ( vocab_file do_lower_case = True do_basic_tokenize = True never_split = None unk_token = '[UNK]' sep_token = '[SEP]' pad_token = '[PAD]' cls_token = '[CLS]' mask_token = '[MASK]' tokenize_chinese_chars = True strip_accents = None **kwargs ) Parameters vocab_file (str) — File containing the vocabulary. do_lower_case (bool, optional, defaults to True) — Whether or not to lowercase the input when tokenizing. do_basic_tokenize (bool, optional, defaults to True) — Whether or not to do basic tokenization before WordPiece. never_split (Iterable, optional) — Collection of tokens which will never be split during tokenization. Only has an effect when do_basic_tokenize=True. unk_token (str, optional, defaults to "[UNK]") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. sep_token (str, optional, defaults to "[SEP]") — The separator token, which is used when building a sequence from multiple sequences, e.g. two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. pad_token (str, optional, defaults to "[PAD]") — The token used for padding, for example when batching sequences of different lengths. cls_token (str, optional, defaults to "[CLS]") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. mask_token (str, optional, defaults to "[MASK]") — The token used for masking values.
This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. tokenize_chinese_chars (bool, optional, defaults to True) — Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this issue). strip_accents (bool, optional) — Whether or not to strip all accents. If this option is not specified, then it will be determined by the value for lowercase (as in the original BERT). Construct a DistilBERT tokenizer. Based on WordPiece. This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. build_inputs_with_special_tokens < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs to which the special tokens will be added. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of input IDs with the appropriate special tokens. Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A BERT sequence has the following format: single sequence: [CLS] X [SEP] pair of sequences: [CLS] A [SEP] B [SEP] Converts a sequence of tokens (string) in a single string. create_token_type_ids_from_sequences < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of token type IDs according to the given sequence(s). Create a mask from the two sequences passed to be used in a sequence-pair classification task. A BERT sequence pair mask has the following format: 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 | first sequence | second sequence | If token_ids_1 is None, this method only returns the first portion of the mask (0s). get_special_tokens_mask < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None already_has_special_tokens: bool = False ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. already_has_special_tokens (bool, optional, defaults to False) — Whether or not the token list is already formatted with special tokens for the model. A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model method. DistilBertTokenizerFast class transformers.DistilBertTokenizerFast < source > ( vocab_file = None tokenizer_file = None do_lower_case = True unk_token = '[UNK]' sep_token = '[SEP]' pad_token = '[PAD]' cls_token = '[CLS]' mask_token = '[MASK]' tokenize_chinese_chars = True strip_accents = None **kwargs ) Parameters vocab_file (str) — File containing the vocabulary. do_lower_case (bool, optional, defaults to True) — Whether or not to lowercase the input when tokenizing. unk_token (str, optional, defaults to "[UNK]") — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. sep_token (str, optional, defaults to "[SEP]") — The separator token, which is used when building a sequence from multiple sequences, e.g. 
two sequences for sequence classification or for a text and a question for question answering. It is also used as the last token of a sequence built with special tokens. pad_token (str, optional, defaults to "[PAD]") — The token used for padding, for example when batching sequences of different lengths. cls_token (str, optional, defaults to "[CLS]") — The classifier token which is used when doing sequence classification (classification of the whole sequence instead of per-token classification). It is the first token of the sequence when built with special tokens. mask_token (str, optional, defaults to "[MASK]") — The token used for masking values. This is the token used when training this model with masked language modeling. This is the token which the model will try to predict. clean_text (bool, optional, defaults to True) — Whether or not to clean the text before tokenization by removing any control characters and replacing all whitespaces by the classic one. tokenize_chinese_chars (bool, optional, defaults to True) — Whether or not to tokenize Chinese characters. This should likely be deactivated for Japanese (see this issue). strip_accents (bool, optional) — Whether or not to strip all accents. If this option is not specified, then it will be determined by the value for lowercase (as in the original BERT). wordpieces_prefix (str, optional, defaults to "##") — The prefix for subwords. Construct a “fast” DistilBERT tokenizer (backed by HuggingFace’s tokenizers library). Based on WordPiece. This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. build_inputs_with_special_tokens < source > ( token_ids_0 token_ids_1 = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs to which the special tokens will be added. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of input IDs with the appropriate special tokens. Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and adding special tokens. A BERT sequence has the following format: single sequence: [CLS] X [SEP] pair of sequences: [CLS] A [SEP] B [SEP] create_token_type_ids_from_sequences < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of token type IDs according to the given sequence(s). Create a mask from the two sequences passed to be used in a sequence-pair classification task. A BERT sequence pair mask has the following format: 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 | first sequence | second sequence | If token_ids_1 is None, this method only returns the first portion of the mask (0s). DistilBertModel class transformers.DistilBertModel < source > ( config: PretrainedConfig ) Parameters config (DistilBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare DistilBERT encoder/transformer outputting raw hidden-states without any specific head on top. This model inherits from PreTrainedModel. 
Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, num_choices)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, num_choices), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DistilBertConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). 
Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The DistilBertModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, DistilBertModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") >>> model = DistilBertModel.from_pretrained("distilbert-base-uncased") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state DistilBertForMaskedLM class transformers.DistilBertForMaskedLM < source > ( config: PretrainedConfig ) Parameters config (DistilBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DistilBert Model with a masked language modeling head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, num_choices)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, num_choices), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. 
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]. A transformers.modeling_outputs.MaskedLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DistilBertConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The DistilBertForMaskedLM forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, DistilBertForMaskedLM >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") >>> model = DistilBertForMaskedLM.from_pretrained("distilbert-base-uncased") >>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> >>> mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0] >>> predicted_token_id = logits[0, mask_token_index].argmax(axis=-1) >>> labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"] >>> >>> labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100) >>> outputs = model(**inputs, labels=labels) DistilBertForSequenceClassification class transformers.DistilBertForSequenceClassification < source > ( config: PretrainedConfig ) Parameters config (DistilBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. 
DistilBert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DistilBertConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). 
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The DistilBertForSequenceClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example of single-label classification: >>> import torch >>> from transformers import AutoTokenizer, DistilBertForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") >>> model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_id = logits.argmax().item() >>> >>> num_labels = len(model.config.id2label) >>> model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=num_labels) >>> labels = torch.tensor([1]) >>> loss = model(**inputs, labels=labels).loss Example of multi-label classification: >>> import torch >>> from transformers import AutoTokenizer, DistilBertForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") >>> model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased", problem_type="multi_label_classification") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5] >>> >>> num_labels = len(model.config.id2label) >>> model = DistilBertForSequenceClassification.from_pretrained( ... "distilbert-base-uncased", num_labels=num_labels, problem_type="multi_label_classification" ... ) >>> labels = torch.sum( ... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1 ... ).to(torch.float) >>> loss = model(**inputs, labels=labels).loss DistilBertForMultipleChoice class transformers.DistilBertForMultipleChoice < source > ( config: PretrainedConfig ) Parameters config (DistilBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DistilBert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model inherits from PreTrainedModel. 
Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. (See input_ids above) A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DistilBertConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above). Classification scores (before SoftMax). 
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The DistilBertForMultipleChoice forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import AutoTokenizer, DistilBertForMultipleChoice >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased") >>> model = DistilBertForMultipleChoice.from_pretrained("distilbert-base-cased") >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." >>> choice0 = "It is eaten with a fork and a knife." >>> choice1 = "It is eaten while held in the hand." >>> labels = torch.tensor(0).unsqueeze(0) >>> encoding = tokenizer([[prompt, choice0], [prompt, choice1]], return_tensors="pt", padding=True) >>> outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) >>> >>> loss = outputs.loss >>> logits = outputs.logits DistilBertForTokenClassification class transformers.DistilBertForTokenClassification < source > ( config: PretrainedConfig ) Parameters config (DistilBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DistilBert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape ({0})) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. 
See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape ({0}), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape ({0}, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1]. A transformers.modeling_outputs.TokenClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DistilBertConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The DistilBertForTokenClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, DistilBertForTokenClassification >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") >>> model = DistilBertForTokenClassification.from_pretrained("distilbert-base-uncased") >>> inputs = tokenizer( ... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt" ... 
) >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_token_class_ids = logits.argmax(-1) >>> # Note that tokens are classified rather than input words, so there may be more predicted token classes than words; multiple token classes might account for the same word >>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]] >>> labels = predicted_token_class_ids >>> loss = model(**inputs, labels=labels).loss DistilBertForQuestionAnswering class transformers.DistilBertForQuestionAnswering < source > ( config: PretrainedConfig ) Parameters config (DistilBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DistilBert Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute span start logits and span end logits). This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None start_positions: typing.Optional[torch.Tensor] = None end_positions: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple.
start_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. end_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DistilBertConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The DistilBertForQuestionAnswering forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, DistilBertForQuestionAnswering >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") >>> model = DistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased") >>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" >>> inputs = tokenizer(question, text, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> answer_start_index = outputs.start_logits.argmax() >>> answer_end_index = outputs.end_logits.argmax() >>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] >>> >>> target_start_index = torch.tensor([14]) >>> target_end_index = torch.tensor([15]) >>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index) >>> loss = outputs.loss TFDistilBertModel class transformers.TFDistilBertModel < source > ( *args **kwargs ) Parameters config (DistilBertConfig) — Model configuration class with all the parameters of the model. 
Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare DistilBERT encoder/transformer outputting raw hidden-states without any specific head on top. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. 
This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). A transformers.modeling_tf_outputs.TFBaseModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DistilBertConfig) and inputs. last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(tf.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFDistilBertModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFDistilBertModel >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") >>> model = TFDistilBertModel.from_pretrained("distilbert-base-uncased") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> outputs = model(inputs) >>> last_hidden_states = outputs.last_hidden_state TFDistilBertForMaskedLM class transformers.TFDistilBertForMaskedLM < source > ( *args **kwargs ) Parameters config (DistilBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DistilBert Model with a masked language modeling head on top. This model inherits from TFPreTrainedModel. 
Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFMaskedLMOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. 
See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). labels (tf.Tensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size] A transformers.modeling_tf_outputs.TFMaskedLMOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DistilBertConfig) and inputs. loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Masked language modeling (MLM) loss. logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFDistilBertForMaskedLM forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: >>> from transformers import AutoTokenizer, TFDistilBertForMaskedLM >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") >>> model = TFDistilBertForMaskedLM.from_pretrained("distilbert-base-uncased") >>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="tf") >>> logits = model(**inputs).logits >>> # retrieve the index of [MASK] >>> mask_token_index = tf.where((inputs.input_ids == tokenizer.mask_token_id)[0]) >>> selected_logits = tf.gather_nd(logits[0], indices=mask_token_index) >>> predicted_token_id = tf.math.argmax(selected_logits, axis=-1) >>> labels = tokenizer("The capital of France is Paris.", return_tensors="tf")["input_ids"] >>> # keep only the labels at the [MASK] positions, setting all other positions to -100 so they are ignored by the loss >>> labels = tf.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100) >>> outputs = model(**inputs, labels=labels) TFDistilBertForSequenceClassification class transformers.TFDistilBertForSequenceClassification < source > ( *args **kwargs ) Parameters config (DistilBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DistilBert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! A short illustrative sketch of these input formats follows below.
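As an illustration (not part of the original API reference), here is a minimal sketch of the three input formats described above, using the sequence classification model and a tokenized batch; the input names follow the docstring, and all three calls produce the same output:

>>> from transformers import AutoTokenizer, TFDistilBertForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
>>> model = TFDistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased")
>>> batch = tokenizer("Hello, my dog is cute", return_tensors="tf")

>>> # 1. keyword arguments, as with PyTorch models
>>> outputs = model(input_ids=batch["input_ids"], attention_mask=batch["attention_mask"])
>>> # 2. a list of tensors, in the order given in the docstring
>>> outputs = model([batch["input_ids"], batch["attention_mask"]])
>>> # 3. a dictionary of tensors in the first positional argument (the format Keras itself prefers)
>>> outputs = model({"input_ids": batch["input_ids"], "attention_mask": batch["attention_mask"]})

Which form to use is mostly a question of how the surrounding Keras code is structured; inside model.fit(), the dictionary form is what Keras passes along internally.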
call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFSequenceClassifierOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). labels (tf.Tensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_tf_outputs.TFSequenceClassifierOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DistilBertConfig) and inputs. loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (tf.Tensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). 
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFDistilBertForSequenceClassification forward method overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFDistilBertForSequenceClassification >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") >>> model = TFDistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf") >>> logits = model(**inputs).logits >>> predicted_class_id = int(tf.math.argmax(logits, axis=-1)[0]) >>> # to train a model on `num_labels` classes, pass `num_labels` to `from_pretrained(...)` >>> num_labels = len(model.config.id2label) >>> model = TFDistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=num_labels) >>> labels = tf.constant(1) >>> loss = model(**inputs, labels=labels).loss TFDistilBertForMultipleChoice class transformers.TFDistilBertForMultipleChoice < source > ( *args **kwargs ) Parameters config (DistilBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DistilBert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! A brief training sketch is shown below.
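To make the model.fit() remark above concrete, here is a small, hypothetical training sketch (not part of the original reference). It assumes a recent transformers release in which TF models compute their task loss internally when compile() is called without a loss argument, and in which labels can be passed as the second argument to fit():

>>> import tensorflow as tf
>>> from transformers import AutoTokenizer, TFDistilBertForMultipleChoice

>>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
>>> model = TFDistilBertForMultipleChoice.from_pretrained("distilbert-base-uncased")

>>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced."
>>> choices = ["It is eaten with a fork and a knife.", "It is eaten while held in the hand."]
>>> encoding = tokenizer([prompt, prompt], choices, return_tensors="tf", padding=True)
>>> # add a batch dimension: (batch_size=1, num_choices=2, sequence_length)
>>> features = {k: tf.expand_dims(v, 0) for k, v in encoding.items()}
>>> labels = tf.constant([1])  # index of the correct choice

>>> model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5))  # no loss: computed internally
>>> model.fit(features, labels, epochs=1)

The same pattern scales to a tf.data.Dataset yielding (features, labels) batches.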
If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, num_choices, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). 
labels (tf.Tensor of shape (batch_size,), optional) — Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices - 1] where num_choices is the size of the second dimension of the input tensors. (See input_ids above) A transformers.modeling_tf_outputs.TFMultipleChoiceModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DistilBertConfig) and inputs. loss (tf.Tensor of shape (batch_size, ), optional, returned when labels is provided) — Classification loss. logits (tf.Tensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above). Classification scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFDistilBertForMultipleChoice forward method overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFDistilBertForMultipleChoice >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") >>> model = TFDistilBertForMultipleChoice.from_pretrained("distilbert-base-uncased") >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." >>> choice0 = "It is eaten with a fork and a knife." >>> choice1 = "It is eaten while held in the hand." >>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="tf", padding=True) >>> inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()} >>> outputs = model(inputs) >>> # the linear classification head is randomly initialized and still needs to be trained >>> logits = outputs.logits TFDistilBertForTokenClassification class transformers.TFDistilBertForTokenClassification < source > ( *args **kwargs ) Parameters config (DistilBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DistilBert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None labels: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFTokenClassifierOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. 
This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). labels (tf.Tensor of shape (batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1]. A transformers.modeling_tf_outputs.TFTokenClassifierOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DistilBertConfig) and inputs. loss (tf.Tensor of shape (n,), optional, where n is the number of unmasked labels, returned when labels is provided) — Classification loss. logits (tf.Tensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFDistilBertForTokenClassification forward method overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFDistilBertForTokenClassification >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") >>> model = TFDistilBertForTokenClassification.from_pretrained("distilbert-base-uncased") >>> inputs = tokenizer( ... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="tf" ... ) >>> logits = model(**inputs).logits >>> predicted_token_class_ids = tf.math.argmax(logits, axis=-1) >>> # note that tokens are classified rather than input words, which means that >>> # there might be more predicted token classes than words; >>> # multiple token classes might account for the same word >>> predicted_tokens_classes = [model.config.id2label[t] for t in predicted_token_class_ids[0].numpy().tolist()] >>> labels = predicted_token_class_ids >>> loss = tf.math.reduce_mean(model(**inputs, labels=labels).loss) TFDistilBertForQuestionAnswering class transformers.TFDistilBertForQuestionAnswering < source > ( *args **kwargs ) Parameters config (DistilBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
DistilBert Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits). This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None head_mask: np.ndarray | tf.Tensor | None = None inputs_embeds: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None start_positions: np.ndarray | tf.Tensor | None = None end_positions: np.ndarray | tf.Tensor | None = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. 
inputs_embeds (tf.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). start_positions (tf.Tensor of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss. end_positions (tf.Tensor of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss. A transformers.modeling_tf_outputs.TFQuestionAnsweringModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DistilBertConfig) and inputs. loss (tf.Tensor of shape (batch_size, ), optional, returned when start_positions and end_positions are provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. start_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (tf.Tensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFDistilBertForQuestionAnswering forward method overrides the __call__ special method.
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, TFDistilBertForQuestionAnswering >>> import tensorflow as tf >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") >>> model = TFDistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased") >>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" >>> inputs = tokenizer(question, text, return_tensors="tf") >>> outputs = model(**inputs) >>> answer_start_index = int(tf.math.argmax(outputs.start_logits, axis=-1)[0]) >>> answer_end_index = int(tf.math.argmax(outputs.end_logits, axis=-1)[0]) >>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] >>> # training labels: the ground-truth start and end token indices of the answer span >>> target_start_index = tf.constant([14]) >>> target_end_index = tf.constant([15]) >>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index) >>> loss = tf.math.reduce_mean(outputs.loss) FlaxDistilBertModel class transformers.FlaxDistilBertModel < source > ( config: DistilBertConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs ) Parameters config (DistilBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare DistilBert Model transformer outputting raw hidden-states without any specific head on top. This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading, saving and converting weights from PyTorch models) This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matters related to general usage and behavior. Finally, this model supports inherent JAX features such as: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization __call__ < source > ( input_ids attention_mask = None head_mask = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) Parameters input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. The FlaxDistilBertPreTrainedModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FlaxDistilBertModel >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") >>> model = FlaxDistilBertModel.from_pretrained("distilbert-base-uncased") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="jax") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state FlaxDistilBertForMaskedLM class transformers.FlaxDistilBertForMaskedLM < source > ( config: DistilBertConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs ) Parameters config (DistilBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DistilBert Model with a language modeling head on top. This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading, saving and converting weights from PyTorch models) This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to general usage and behavior. Finally, this model supports inherent JAX features such as: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization __call__ < source > ( input_ids attention_mask = None head_mask = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_flax_outputs.FlaxMaskedLMOutput or tuple(torch.FloatTensor) Parameters input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_flax_outputs.FlaxMaskedLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DistilBertConfig) and inputs. 
logits (jnp.ndarray of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaxDistilBertPreTrainedModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FlaxDistilBertForMaskedLM >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") >>> model = FlaxDistilBertForMaskedLM.from_pretrained("distilbert-base-uncased") >>> inputs = tokenizer("The capital of France is [MASK].", return_tensors="jax") >>> outputs = model(**inputs) >>> logits = outputs.logits FlaxDistilBertForSequenceClassification class transformers.FlaxDistilBertForSequenceClassification < source > ( config: DistilBertConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs ) Parameters config (DistilBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DistilBert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading, saving and converting weights from PyTorch models) This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to general usage and behavior. Finally, this model supports inherent JAX features such as: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization __call__ < source > ( input_ids attention_mask = None head_mask = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? 
attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_flax_outputs.FlaxSequenceClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DistilBertConfig) and inputs. logits (jnp.ndarray of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaxDistilBertPreTrainedModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FlaxDistilBertForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") >>> model = FlaxDistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="jax") >>> outputs = model(**inputs) >>> logits = outputs.logits FlaxDistilBertForMultipleChoice class transformers.FlaxDistilBertForMultipleChoice < source > ( config: DistilBertConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs ) Parameters config (DistilBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DistilBert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading, saving and converting weights from PyTorch models) This model is also a Flax Linen flax.linen.Module subclass. 
Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to general usage and behavior. Finally, this model supports inherent JAX features such as: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization __call__ < source > ( input_ids attention_mask = None head_mask = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or tuple(torch.FloatTensor) Parameters input_ids (numpy.ndarray of shape (batch_size, num_choices, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (numpy.ndarray of shape (batch_size, num_choices, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_flax_outputs.FlaxMultipleChoiceModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DistilBertConfig) and inputs. logits (jnp.ndarray of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above). Classification scores (before SoftMax). hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaxDistilBertPreTrainedModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FlaxDistilBertForMultipleChoice >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") >>> model = FlaxDistilBertForMultipleChoice.from_pretrained("distilbert-base-uncased") >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." >>> choice0 = "It is eaten with a fork and a knife." 
>>> choice1 = "It is eaten while held in the hand." >>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="jax", padding=True) >>> outputs = model(**{k: v[None, :] for k, v in encoding.items()}) >>> logits = outputs.logits FlaxDistilBertForTokenClassification class transformers.FlaxDistilBertForTokenClassification < source > ( config: DistilBertConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs ) Parameters config (DistilBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DistilBert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading, saving and converting weights from PyTorch models) This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to general usage and behavior. Finally, this model supports inherent JAX features such as: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization __call__ < source > ( input_ids attention_mask = None head_mask = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_flax_outputs.FlaxTokenClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DistilBertConfig) and inputs. logits (jnp.ndarray of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). 
Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaxDistilBertPreTrainedModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FlaxDistilBertForTokenClassification >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") >>> model = FlaxDistilBertForTokenClassification.from_pretrained("distilbert-base-uncased") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="jax") >>> outputs = model(**inputs) >>> logits = outputs.logits FlaxDistilBertForQuestionAnswering class transformers.FlaxDistilBertForQuestionAnswering < source > ( config: DistilBertConfig input_shape: typing.Tuple = (1, 1) seed: int = 0 dtype: dtype = <class 'jax.numpy.float32'> _do_init: bool = True **kwargs ) Parameters config (DistilBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DistilBert Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute span start logits and span end logits). This model inherits from FlaxPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading, saving and converting weights from PyTorch models) This model is also a Flax Linen flax.linen.Module subclass. Use it as a regular Flax linen Module and refer to the Flax documentation for all matter related to general usage and behavior. Finally, this model supports inherent JAX features such as: Just-In-Time (JIT) compilation Automatic Differentiation Vectorization Parallelization __call__ < source > ( input_ids attention_mask = None head_mask = None params: dict = None dropout_rng: PRNGKey = None train: bool = False output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput or tuple(torch.FloatTensor) Parameters input_ids (numpy.ndarray of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (numpy.ndarray of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. 
output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_flax_outputs.FlaxQuestionAnsweringModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DistilBertConfig) and inputs. start_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (jnp.ndarray of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). hidden_states (tuple(jnp.ndarray), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of jnp.ndarray (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(jnp.ndarray), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of jnp.ndarray (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The FlaxDistilBertPreTrainedModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, FlaxDistilBertForQuestionAnswering >>> tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") >>> model = FlaxDistilBertForQuestionAnswering.from_pretrained("distilbert-base-uncased") >>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" >>> inputs = tokenizer(question, text, return_tensors="jax") >>> outputs = model(**inputs) >>> start_scores = outputs.start_logits >>> end_scores = outputs.end_logits
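To turn these span logits into an answer string, one common recipe is to take the argmax of the start and end scores and decode the corresponding tokens. The snippet below is only a minimal sketch of that step (note that distilbert-base-uncased has no fine-tuned question answering head, so the predicted span is essentially arbitrary; the sketch also assumes the predicted end index does not precede the start index):
>>> import jax.numpy as jnp
>>> # pick the most likely start and end positions for the first example in the batch
>>> start_index = int(jnp.argmax(start_scores, axis=-1)[0])
>>> end_index = int(jnp.argmax(end_scores, axis=-1)[0])
>>> # slice the corresponding input ids and decode them back to text
>>> answer_ids = inputs["input_ids"][0, start_index : end_index + 1]
>>> answer = tokenizer.decode(answer_ids.tolist())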
https://huggingface.co/docs/transformers/model_doc/dinov2
DINOv2 Overview The DINOv2 model was proposed in DINOv2: Learning Robust Visual Features without Supervision by Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Mahmoud Assran, Nicolas Ballas, Wojciech Galuba, Russell Howes, Po-Yao Huang, Shang-Wen Li, Ishan Misra, Michael Rabbat, Vasu Sharma, Gabriel Synnaeve, Hu Xu, Hervé Jegou, Julien Mairal, Patrick Labatut, Armand Joulin, Piotr Bojanowski. DINOv2 is an upgrade of DINO, a self-supervised method applied on Vision Transformers. This method enables all-purpose visual features, i.e., features that work across image distributions and tasks without finetuning. The abstract from the paper is the following: The recent breakthroughs in natural language processing for model pretraining on large quantities of data have opened the way for similar foundation models in computer vision. These models could greatly simplify the use of images in any system by producing all-purpose visual features, i.e., features that work across image distributions and tasks without finetuning. This work shows that existing pretraining methods, especially self-supervised methods, can produce such features if trained on enough curated data from diverse sources. We revisit existing approaches and combine different techniques to scale our pretraining in terms of data and model size. Most of the technical contributions aim at accelerating and stabilizing the training at scale. In terms of data, we propose an automatic pipeline to build a dedicated, diverse, and curated image dataset instead of uncurated data, as typically done in the self-supervised literature. In terms of models, we train a ViT model (Dosovitskiy et al., 2020) with 1B parameters and distill it into a series of smaller models that surpass the best available all-purpose features, OpenCLIP (Ilharco et al., 2021) on most of the benchmarks at image and pixel levels. Tips: One can use AutoImageProcessor class to prepare images for the model. This model was contributed by nielsr. The original code can be found here. Dinov2Config class transformers.Dinov2Config < source > ( hidden_size = 768 num_hidden_layers = 12 num_attention_heads = 12 mlp_ratio = 4 hidden_act = 'gelu' hidden_dropout_prob = 0.0 attention_probs_dropout_prob = 0.0 initializer_range = 0.02 layer_norm_eps = 1e-06 image_size = 224 patch_size = 16 num_channels = 3 qkv_bias = True layerscale_value = 1.0 drop_path_rate = 0.0 use_swiglu_ffn = False out_features = None out_indices = None apply_layernorm = True reshape_hidden_states = True **kwargs ) Parameters hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. mlp_ratio (int, optional, defaults to 4) — Ratio of the hidden size of the MLPs relative to the hidden_size. hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported. hidden_dropout_prob (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. 
attention_probs_dropout_prob (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (float, optional, defaults to 1e-6) — The epsilon used by the layer normalization layers. image_size (int, optional, defaults to 224) — The size (resolution) of each image. patch_size (int, optional, defaults to 16) — The size (resolution) of each patch. num_channels (int, optional, defaults to 3) — The number of input channels. qkv_bias (bool, optional, defaults to True) — Whether to add a bias to the queries, keys and values. layerscale_value (float, optional, defaults to 1.0) — Initial value to use for layer scale. drop_path_rate (float, optional, defaults to 0.0) — Stochastic depth rate per sample (when applied in the main path of residual layers). use_swiglu_ffn (bool, optional, defaults to False) — Whether to use the SwiGLU feedforward neural network. out_features (List[str], optional) — If used as backbone, list of features to output. Can be any of "stem", "stage1", "stage2", etc. (depending on how many stages the model has). If unset and out_indices is set, will default to the corresponding stages. If unset and out_indices is unset, will default to the last stage. out_indices (List[int], optional) — If used as backbone, list of indices of features to output. Can be any of 0, 1, 2, etc. (depending on how many stages the model has). If unset and out_features is set, will default to the corresponding stages. If unset and out_features is unset, will default to the last stage. apply_layernorm (bool, optional, defaults to True) — Whether to apply layer normalization to the feature maps in case the model is used as backbone. reshape_hidden_states (bool, optional, defaults to True) — Whether to reshape the feature maps to 4D tensors of shape (batch_size, hidden_size, height, width) in case the model is used as backbone. If False, the feature maps will be 3D tensors of shape (batch_size, seq_len, hidden_size). This is the configuration class to store the configuration of a Dinov2Model. It is used to instantiate a DINOv2 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the DINOv2 facebook/dinov2-base architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example: >>> from transformers import Dinov2Config, Dinov2Model >>> # Initializing a Dinov2 facebook/dinov2-base style configuration >>> configuration = Dinov2Config() >>> # Initializing a model (with random weights) from that configuration >>> model = Dinov2Model(configuration) >>> # Accessing the model configuration >>> configuration = model.config Dinov2Model class transformers.Dinov2Model < source > ( config: Dinov2Config ) Parameters config (Dinov2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare DINOv2 Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
forward < source > ( pixel_values: typing.Optional[torch.Tensor] = None bool_masked_pos: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See BitImageProcessor.preprocess() for details. bool_masked_pos (torch.BoolTensor of shape (batch_size, sequence_length)) — Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0). Only relevant for pre-training. head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Dinov2Config) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The Dinov2Model forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: >>> from transformers import AutoImageProcessor, Dinov2Model >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> image_processor = AutoImageProcessor.from_pretrained("facebook/dinov2-base") >>> model = Dinov2Model.from_pretrained("facebook/dinov2-base") >>> inputs = image_processor(image, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state >>> list(last_hidden_states.shape) [1, 257, 768] Dinov2ForImageClassification class transformers.Dinov2ForImageClassification < source > ( config: Dinov2Config ) Parameters config (Dinov2Config) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Dinov2 Model transformer with an image classification head on top (a linear layer on top of the final hidden state of the [CLS] token) e.g. for ImageNet. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.ImageClassifierOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See BitImageProcessor.preprocess() for details. head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the image classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_outputs.ImageClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (Dinov2Config) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). 
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states (also called feature maps) of the model at the output of each stage. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, patch_size, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The Dinov2ForImageClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoImageProcessor, Dinov2ForImageClassification >>> import torch >>> from datasets import load_dataset >>> dataset = load_dataset("huggingface/cats-image") >>> image = dataset["test"]["image"][0] >>> image_processor = AutoImageProcessor.from_pretrained("facebook/dinov2-base") >>> model = Dinov2ForImageClassification.from_pretrained("facebook/dinov2-base") >>> inputs = image_processor(image, return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> >>> predicted_label = logits.argmax(-1).item()
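Because DINOv2 is meant to provide all-purpose visual features, the hidden states of Dinov2Model are also often used directly as image embeddings (for example for retrieval or k-NN classification) without any fine-tuning. The following is only a minimal sketch of one common pooling choice (taking the [CLS] token; averaging the patch tokens is another option), reusing the checkpoint and image from the examples above:
>>> import torch
>>> from transformers import AutoImageProcessor, Dinov2Model
>>> from datasets import load_dataset
>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]
>>> image_processor = AutoImageProcessor.from_pretrained("facebook/dinov2-base")
>>> model = Dinov2Model.from_pretrained("facebook/dinov2-base")
>>> inputs = image_processor(image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> # token 0 is the [CLS] token, the remaining 256 tokens are patch features
>>> cls_embedding = outputs.last_hidden_state[:, 0]
>>> patch_features = outputs.last_hidden_state[:, 1:]
>>> # L2-normalize to obtain a global image descriptor that can be compared with cosine similarity
>>> image_embedding = torch.nn.functional.normalize(cls_embedding, dim=-1)
>>> list(image_embedding.shape)
[1, 768]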
https://huggingface.co/docs/transformers/model_doc/detr
DETR Overview The DETR model was proposed in End-to-End Object Detection with Transformers by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov and Sergey Zagoruyko. DETR consists of a convolutional backbone followed by an encoder-decoder Transformer which can be trained end-to-end for object detection. It greatly simplifies a lot of the complexity of models like Faster-R-CNN and Mask-R-CNN, which use things like region proposals, non-maximum suppression procedure and anchor generation. Moreover, DETR can also be naturally extended to perform panoptic segmentation, by simply adding a mask head on top of the decoder outputs. The abstract from the paper is the following: We present a new method that views object detection as a direct set prediction problem. Our approach streamlines the detection pipeline, effectively removing the need for many hand-designed components like a non-maximum suppression procedure or anchor generation that explicitly encode our prior knowledge about the task. The main ingredients of the new framework, called DEtection TRansformer or DETR, are a set-based global loss that forces unique predictions via bipartite matching, and a transformer encoder-decoder architecture. Given a fixed small set of learned object queries, DETR reasons about the relations of the objects and the global image context to directly output the final set of predictions in parallel. The new model is conceptually simple and does not require a specialized library, unlike many other modern detectors. DETR demonstrates accuracy and run-time performance on par with the well-established and highly-optimized Faster RCNN baseline on the challenging COCO object detection dataset. Moreover, DETR can be easily generalized to produce panoptic segmentation in a unified manner. We show that it significantly outperforms competitive baselines. This model was contributed by nielsr. The original code can be found here. Here’s a TLDR explaining how DetrForObjectDetection works: First, an image is sent through a pre-trained convolutional backbone (in the paper, the authors use ResNet-50/ResNet-101). Let’s assume we also add a batch dimension. This means that the input to the backbone is a tensor of shape (batch_size, 3, height, width), assuming the image has 3 color channels (RGB). The CNN backbone outputs a new lower-resolution feature map, typically of shape (batch_size, 2048, height/32, width/32). This is then projected to match the hidden dimension of the Transformer of DETR, which is 256 by default, using a nn.Conv2D layer. So now, we have a tensor of shape (batch_size, 256, height/32, width/32). Next, the feature map is flattened and transposed to obtain a tensor of shape (batch_size, seq_len, d_model) = (batch_size, width/32*height/32, 256). So a difference with NLP models is that the sequence length is actually longer than usual, but with a smaller d_model (which in NLP is typically 768 or higher). Next, this is sent through the encoder, outputting encoder_hidden_states of the same shape (you can consider these as image features). Next, so-called object queries are sent through the decoder. This is a tensor of shape (batch_size, num_queries, d_model), with num_queries typically set to 100 and initialized with zeros. These input embeddings are learnt positional encodings that the authors refer to as object queries, and similarly to the encoder, they are added to the input of each attention layer. Each object query will look for a particular object in the image. 
The decoder updates these embeddings through multiple self-attention and encoder-decoder attention layers to output decoder_hidden_states of the same shape: (batch_size, num_queries, d_model). Next, two heads are added on top for object detection: a linear layer for classifying each object query into one of the objects or “no object”, and a MLP to predict bounding boxes for each query. The model is trained using a bipartite matching loss: so what we actually do is compare the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, 96 annotations will just have a “no object” as class and “no bounding box” as bounding box). The Hungarian matching algorithm is used to find an optimal one-to-one mapping of each of the N queries to each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model. DETR can be naturally extended to perform panoptic segmentation (which unifies semantic segmentation and instance segmentation). DetrForSegmentation adds a segmentation mask head on top of DetrForObjectDetection. The mask head can be trained either jointly, or in a two steps process, where one first trains a DetrForObjectDetection model to detect bounding boxes around both “things” (instances) and “stuff” (background things like trees, roads, sky), then freeze all the weights and train only the mask head for 25 epochs. Experimentally, these two approaches give similar results. Note that predicting boxes is required for the training to be possible, since the Hungarian matching is computed using distances between boxes. Tips: DETR uses so-called object queries to detect objects in an image. The number of queries determines the maximum number of objects that can be detected in a single image, and is set to 100 by default (see parameter num_queries of DetrConfig). Note that it’s good to have some slack (in COCO, the authors used 100, while the maximum number of objects in a COCO image is ~70). The decoder of DETR updates the query embeddings in parallel. This is different from language models like GPT-2, which use autoregressive decoding instead of parallel. Hence, no causal attention mask is used. DETR adds position embeddings to the hidden states at each self-attention and cross-attention layer before projecting to queries and keys. For the position embeddings of the image, one can choose between fixed sinusoidal or learned absolute position embeddings. By default, the parameter position_embedding_type of DetrConfig is set to "sine". During training, the authors of DETR did find it helpful to use auxiliary losses in the decoder, especially to help the model output the correct number of objects of each class. If you set the parameter auxiliary_loss of DetrConfig to True, then prediction feedforward neural networks and Hungarian losses are added after each decoder layer (with the FFNs sharing parameters). If you want to train the model in a distributed environment across multiple nodes, then one should update the num_boxes variable in the DetrLoss class of modeling_detr.py. When training on multiple nodes, this should be set to the average number of target boxes across all nodes, as can be seen in the original implementation here. 
DetrForObjectDetection and DetrForSegmentation can be initialized with any convolutional backbone available in the timm library. Initializing with a MobileNet backbone for example can be done by setting the backbone attribute of DetrConfig to "tf_mobilenetv3_small_075", and then initializing the model with that config. DETR resizes the input images such that the shortest side is at least a certain number of pixels while the longest is at most 1333 pixels. At training time, scale augmentation is used such that the shortest side is randomly set to at least 480 and at most 800 pixels. At inference time, the shortest side is set to 800. One can use DetrImageProcessor to prepare images (and optional annotations in COCO format) for the model. Due to this resizing, images in a batch can have different sizes. DETR solves this by padding images up to the largest size in a batch, and by creating a pixel mask that indicates which pixels are real/which are padding. Alternatively, one can also define a custom collate_fn in order to batch images together, using ~transformers.DetrImageProcessor.pad_and_create_pixel_mask. The size of the images will determine the amount of memory being used, and will thus determine the batch_size. It is advised to use a batch size of 2 per GPU. See this Github thread for more info. There are three ways to instantiate a DETR model (depending on what you prefer): Option 1: Instantiate DETR with pre-trained weights for the entire model >>> from transformers import DetrForObjectDetection >>> model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50") Option 2: Instantiate DETR with randomly initialized weights for Transformer, but pre-trained weights for backbone >>> from transformers import DetrConfig, DetrForObjectDetection >>> config = DetrConfig() >>> model = DetrForObjectDetection(config) Option 3: Instantiate DETR with randomly initialized weights for backbone + Transformer >>> config = DetrConfig(use_pretrained_backbone=False) >>> model = DetrForObjectDetection(config) As a summary, the three supported tasks compare as follows:
Object detection — Description: predicting bounding boxes and class labels around objects in an image. Model: DetrForObjectDetection. Example dataset: COCO detection. Format of annotations to provide to DetrImageProcessor: {‘image_id’: int, ‘annotations’: List[Dict]}, each Dict being a COCO object annotation. Postprocessing (i.e. converting the output of the model to the COCO API): post_process(). Evaluators: CocoEvaluator with iou_types="bbox".
Instance segmentation — Description: predicting masks around objects (i.e. instances) in an image. Model: DetrForSegmentation. Example dataset: COCO detection, COCO panoptic. Format of annotations to provide to DetrImageProcessor: {‘image_id’: int, ‘annotations’: List[Dict]} (in case of COCO detection) or {‘file_name’: str, ‘image_id’: int, ‘segments_info’: List[Dict]} (in case of COCO panoptic). Postprocessing: post_process_segmentation(). Evaluators: CocoEvaluator with iou_types="bbox" or "segm".
Panoptic segmentation — Description: predicting masks around both objects (i.e. instances) as well as “stuff” (i.e. background things like trees and roads) in an image. Model: DetrForSegmentation. Example dataset: COCO panoptic. Format of annotations to provide to DetrImageProcessor: {‘file_name’: str, ‘image_id’: int, ‘segments_info’: List[Dict]} and masks_path (path to directory containing PNG files of the masks). Postprocessing: post_process_segmentation(), post_process_panoptic(). Evaluators: CocoEvaluator with iou_types="bbox" or "segm", PanopticEvaluator.
In short, one should prepare the data either in COCO detection or COCO panoptic format, then use DetrImageProcessor to create pixel_values, pixel_mask and optional labels, which can then be used to train (or fine-tune) a model. For evaluation, one should first convert the outputs of the model using one of the postprocessing methods of DetrImageProcessor. These can be provided to either CocoEvaluator or PanopticEvaluator, which allow you to calculate metrics like mean Average Precision (mAP) and Panoptic Quality (PQ). The latter objects are implemented in the original repository. See the example notebooks for more info regarding evaluation. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DETR. Object Detection All example notebooks illustrating fine-tuning DetrForObjectDetection and DetrForSegmentation on a custom dataset can be found here. See also: Object detection task guide If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. DETR specific outputs class transformers.models.detr.modeling_detr.DetrModelOutput < source > ( last_hidden_state: FloatTensor = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None decoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None intermediate_hidden_states: typing.Optional[torch.FloatTensor] = None ) Parameters last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model. decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).
Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. intermediate_hidden_states (torch.FloatTensor of shape (config.decoder_layers, batch_size, sequence_length, hidden_size), optional, returned when config.auxiliary_loss=True) — Intermediate decoder activations, i.e. the output of each decoder layer, each of them gone through a layernorm. Base class for outputs of the DETR encoder-decoder model. This class adds one attribute to Seq2SeqModelOutput, namely an optional stack of intermediate decoder activations, i.e. the output of each decoder layer, each of them gone through a layernorm. This is useful when training the model with auxiliary decoding losses. class transformers.models.detr.modeling_detr.DetrObjectDetectionOutput < source > ( loss: typing.Optional[torch.FloatTensor] = None loss_dict: typing.Optional[typing.Dict] = None logits: FloatTensor = None pred_boxes: FloatTensor = None auxiliary_outputs: typing.Optional[typing.List[typing.Dict]] = None last_hidden_state: typing.Optional[torch.FloatTensor] = None decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None decoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters loss (torch.FloatTensor of shape (1,), optional, returned when labels are provided)) — Total loss as a linear combination of a negative log-likehood (cross-entropy) for class prediction and a bounding box loss. The latter is defined as a linear combination of the L1 loss and the generalized scale-invariant IoU loss. loss_dict (Dict, optional) — A dictionary containing the individual losses. Useful for logging. logits (torch.FloatTensor of shape (batch_size, num_queries, num_classes + 1)) — Classification logits (including no-object) for all queries. pred_boxes (torch.FloatTensor of shape (batch_size, num_queries, 4)) — Normalized boxes coordinates for all queries, represented as (center_x, center_y, width, height). These values are normalized in [0, 1], relative to the size of each individual image in the batch (disregarding possible padding). You can use post_process_object_detection() to retrieve the unnormalized bounding boxes. 
auxiliary_outputs (list[Dict], optional) — Optional, only returned when auxilary losses are activated (i.e. config.auxiliary_loss is set to True) and labels are provided. It is a list of dictionaries containing the two above keys (logits and pred_boxes) for each decoder layer. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the decoder of the model. decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. Output type of DetrForObjectDetection. 
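The pred_boxes above are normalized (center_x, center_y, width, height) coordinates. In practice one should rely on post_process_object_detection() for the conversion, but as a rough illustration of what that conversion involves, turning them into absolute (top_left_x, top_left_y, bottom_right_x, bottom_right_y) corner coordinates amounts to the following minimal sketch (center_to_corners is a hypothetical helper name used here only for illustration, not part of the library):
>>> import torch
>>> def center_to_corners(pred_boxes, image_height, image_width):
...     # pred_boxes has shape (batch_size, num_queries, 4) in normalized (cx, cy, w, h) format
...     cx, cy, w, h = pred_boxes.unbind(-1)
...     corners = torch.stack([cx - 0.5 * w, cy - 0.5 * h, cx + 0.5 * w, cy + 0.5 * h], dim=-1)
...     # rescale from relative [0, 1] coordinates to absolute pixel coordinates
...     scale = torch.tensor([image_width, image_height, image_width, image_height], dtype=corners.dtype)
...     return corners * scale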
class transformers.models.detr.modeling_detr.DetrSegmentationOutput < source > ( loss: typing.Optional[torch.FloatTensor] = None loss_dict: typing.Optional[typing.Dict] = None logits: FloatTensor = None pred_boxes: FloatTensor = None pred_masks: FloatTensor = None auxiliary_outputs: typing.Optional[typing.List[typing.Dict]] = None last_hidden_state: typing.Optional[torch.FloatTensor] = None decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None decoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters loss (torch.FloatTensor of shape (1,), optional, returned when labels are provided)) — Total loss as a linear combination of a negative log-likehood (cross-entropy) for class prediction and a bounding box loss. The latter is defined as a linear combination of the L1 loss and the generalized scale-invariant IoU loss. loss_dict (Dict, optional) — A dictionary containing the individual losses. Useful for logging. logits (torch.FloatTensor of shape (batch_size, num_queries, num_classes + 1)) — Classification logits (including no-object) for all queries. pred_boxes (torch.FloatTensor of shape (batch_size, num_queries, 4)) — Normalized boxes coordinates for all queries, represented as (center_x, center_y, width, height). These values are normalized in [0, 1], relative to the size of each individual image in the batch (disregarding possible padding). You can use post_process_object_detection() to retrieve the unnormalized bounding boxes. pred_masks (torch.FloatTensor of shape (batch_size, num_queries, height/4, width/4)) — Segmentation masks logits for all queries. See also post_process_semantic_segmentation() or post_process_instance_segmentation() post_process_panoptic_segmentation() to evaluate semantic, instance and panoptic segmentation masks respectively. auxiliary_outputs (list[Dict], optional) — Optional, only returned when auxiliary losses are activated (i.e. config.auxiliary_loss is set to True) and labels are provided. It is a list of dictionaries containing the two above keys (logits and pred_boxes) for each decoder layer. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the decoder of the model. decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. 
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. Output type of DetrForSegmentation. DetrConfig class transformers.DetrConfig < source > ( use_timm_backbone = True backbone_config = None num_channels = 3 num_queries = 100 encoder_layers = 6 encoder_ffn_dim = 2048 encoder_attention_heads = 8 decoder_layers = 6 decoder_ffn_dim = 2048 decoder_attention_heads = 8 encoder_layerdrop = 0.0 decoder_layerdrop = 0.0 is_encoder_decoder = True activation_function = 'relu' d_model = 256 dropout = 0.1 attention_dropout = 0.0 activation_dropout = 0.0 init_std = 0.02 init_xavier_std = 1.0 auxiliary_loss = False position_embedding_type = 'sine' backbone = 'resnet50' use_pretrained_backbone = True dilation = False class_cost = 1 bbox_cost = 5 giou_cost = 2 mask_loss_coefficient = 1 dice_loss_coefficient = 1 bbox_loss_coefficient = 5 giou_loss_coefficient = 2 eos_coefficient = 0.1 **kwargs ) Parameters use_timm_backbone (bool, optional, defaults to True) — Whether or not to use the timm library for the backbone. If set to False, will use the AutoBackbone API. backbone_config (PretrainedConfig or dict, optional) — The configuration of the backbone model. Only used in case use_timm_backbone is set to False in which case it will default to ResNetConfig(). num_channels (int, optional, defaults to 3) — The number of input channels. num_queries (int, optional, defaults to 100) — Number of object queries, i.e. detection slots. This is the maximal number of objects DetrModel can detect in a single image. For COCO, we recommend 100 queries. d_model (int, optional, defaults to 256) — Dimension of the layers. encoder_layers (int, optional, defaults to 6) — Number of encoder layers. decoder_layers (int, optional, defaults to 6) — Number of decoder layers. encoder_attention_heads (int, optional, defaults to 8) — Number of attention heads for each attention layer in the Transformer encoder. decoder_attention_heads (int, optional, defaults to 8) — Number of attention heads for each attention layer in the Transformer decoder. decoder_ffn_dim (int, optional, defaults to 2048) — Dimension of the “intermediate” (often named feed-forward) layer in decoder. 
encoder_ffn_dim (int, optional, defaults to 2048) — Dimension of the “intermediate” (often named feed-forward) layer in decoder. activation_function (str or function, optional, defaults to "relu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported. dropout (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities. activation_dropout (float, optional, defaults to 0.0) — The dropout ratio for activations inside the fully connected layer. init_std (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. init_xavier_std (float, optional, defaults to 1) — The scaling factor used for the Xavier initialization gain in the HM Attention map module. encoder_layerdrop (float, optional, defaults to 0.0) — The LayerDrop probability for the encoder. See the [LayerDrop paper](see https://arxiv.org/abs/1909.11556) for more details. decoder_layerdrop (float, optional, defaults to 0.0) — The LayerDrop probability for the decoder. See the [LayerDrop paper](see https://arxiv.org/abs/1909.11556) for more details. auxiliary_loss (bool, optional, defaults to False) — Whether auxiliary decoding losses (loss at each decoder layer) are to be used. position_embedding_type (str, optional, defaults to "sine") — Type of position embeddings to be used on top of the image features. One of "sine" or "learned". backbone (str, optional, defaults to "resnet50") — Name of convolutional backbone to use in case use_timm_backbone = True. Supports any convolutional backbone from the timm package. For a list of all available models, see this page. use_pretrained_backbone (bool, optional, defaults to True) — Whether to use pretrained weights for the backbone. Only supported when use_timm_backbone = True. dilation (bool, optional, defaults to False) — Whether to replace stride with dilation in the last convolutional block (DC5). Only supported when use_timm_backbone = True. class_cost (float, optional, defaults to 1) — Relative weight of the classification error in the Hungarian matching cost. bbox_cost (float, optional, defaults to 5) — Relative weight of the L1 error of the bounding box coordinates in the Hungarian matching cost. giou_cost (float, optional, defaults to 2) — Relative weight of the generalized IoU loss of the bounding box in the Hungarian matching cost. mask_loss_coefficient (float, optional, defaults to 1) — Relative weight of the Focal loss in the panoptic segmentation loss. dice_loss_coefficient (float, optional, defaults to 1) — Relative weight of the DICE/F-1 loss in the panoptic segmentation loss. bbox_loss_coefficient (float, optional, defaults to 5) — Relative weight of the L1 bounding box loss in the object detection loss. giou_loss_coefficient (float, optional, defaults to 2) — Relative weight of the generalized IoU loss in the object detection loss. eos_coefficient (float, optional, defaults to 0.1) — Relative classification weight of the ‘no-object’ class in the object detection loss. This is the configuration class to store the configuration of a DetrModel. It is used to instantiate a DETR model according to the specified arguments, defining the model architecture. 
Instantiating a configuration with the defaults will yield a similar configuration to that of the DETR facebook/detr-resnet-50 architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Examples: >>> from transformers import DetrConfig, DetrModel >>> >>> configuration = DetrConfig() >>> >>> model = DetrModel(configuration) >>> >>> configuration = model.config from_backbone_config < source > ( backbone_config: PretrainedConfig **kwargs ) → DetrConfig Parameters backbone_config (PretrainedConfig) — The backbone configuration. An instance of a configuration object Instantiate a DetrConfig (or a derived class) from a pre-trained backbone model configuration. DetrImageProcessor class transformers.DetrImageProcessor < source > ( format: typing.Union[str, transformers.models.detr.image_processing_detr.AnnotionFormat] = <AnnotionFormat.COCO_DETECTION: 'coco_detection'> do_resize: bool = True size: typing.Dict[str, int] = None resample: Resampling = <Resampling.BILINEAR: 2> do_rescale: bool = True rescale_factor: typing.Union[int, float] = 0.00392156862745098 do_normalize: bool = True image_mean: typing.Union[float, typing.List[float]] = None image_std: typing.Union[float, typing.List[float]] = None do_pad: bool = True **kwargs ) Parameters format (str, optional, defaults to "coco_detection") — Data format of the annotations. One of “coco_detection” or “coco_panoptic”. do_resize (bool, optional, defaults to True) — Controls whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by the do_resize parameter in the preprocess method. size (Dict[str, int] optional, defaults to {"shortest_edge" -- 800, "longest_edge": 1333}): Size of the image’s (height, width) dimensions after resizing. Can be overridden by the size parameter in the preprocess method. resample (PILImageResampling, optional, defaults to PILImageResampling.BILINEAR) — Resampling filter to use if resizing the image. do_rescale (bool, optional, defaults to True) — Controls whether to rescale the image by the specified scale rescale_factor. Can be overridden by the do_rescale parameter in the preprocess method. rescale_factor (int or float, optional, defaults to 1/255) — Scale factor to use if rescaling the image. Can be overridden by the rescale_factor parameter in the preprocess method. do_normalize — Controls whether to normalize the image. Can be overridden by the do_normalize parameter in the preprocess method. image_mean (float or List[float], optional, defaults to IMAGENET_DEFAULT_MEAN) — Mean values to use when normalizing the image. Can be a single value or a list of values, one for each channel. Can be overridden by the image_mean parameter in the preprocess method. image_std (float or List[float], optional, defaults to IMAGENET_DEFAULT_STD) — Standard deviation values to use when normalizing the image. Can be a single value or a list of values, one for each channel. Can be overridden by the image_std parameter in the preprocess method. do_pad (bool, optional, defaults to True) — Controls whether to pad the image to the largest image in a batch and create a pixel mask. Can be overridden by the do_pad parameter in the preprocess method. Constructs a Detr image processor. 
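For reference, a typical inference pipeline combining DetrImageProcessor with DetrForObjectDetection looks as follows. This is only a minimal sketch: the checkpoint is the facebook/detr-resnet-50 model mentioned above, and the image URL is an arbitrary COCO example image used here purely for illustration.
>>> import torch
>>> import requests
>>> from PIL import Image
>>> from transformers import DetrImageProcessor, DetrForObjectDetection
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
>>> model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")
>>> inputs = processor(images=image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> # rescale the predictions to the original image size and keep only confident detections
>>> target_sizes = torch.tensor([image.size[::-1]])
>>> results = processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[0]
>>> for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
...     print(model.config.id2label[label.item()], round(score.item(), 2), [round(c, 1) for c in box.tolist()])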
preprocess < source > ( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]] annotations: typing.Union[typing.Dict[str, typing.Union[int, str, typing.List[typing.Dict]]], typing.List[typing.Dict[str, typing.Union[int, str, typing.List[typing.Dict]]]], NoneType] = None return_segmentation_masks: bool = None masks_path: typing.Union[str, pathlib.Path, NoneType] = None do_resize: typing.Optional[bool] = None size: typing.Union[typing.Dict[str, int], NoneType] = None resample = None do_rescale: typing.Optional[bool] = None rescale_factor: typing.Union[int, float, NoneType] = None do_normalize: typing.Optional[bool] = None image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None do_pad: typing.Optional[bool] = None format: typing.Union[str, transformers.models.detr.image_processing_detr.AnnotionFormat, NoneType] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None data_format: typing.Union[str, transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'> input_data_format: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None **kwargs ) Parameters images (ImageInput) — Image or batch of images to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set do_rescale=False. annotations (AnnotationType or List[AnnotationType], optional) — List of annotations associated with the image or batch of images. If annotation is for object detection, the annotations should be a dictionary with the following keys: “image_id” (int): The image id. “annotations” (List[Dict]): List of annotations for an image. Each annotation should be a dictionary. An image can have no annotations, in which case the list should be empty. If annotation is for segmentation, the annotations should be a dictionary with the following keys: “image_id” (int): The image id. “segments_info” (List[Dict]): List of segments for an image. Each segment should be a dictionary. An image can have no segments, in which case the list should be empty. “file_name” (str): The file name of the image. return_segmentation_masks (bool, optional, defaults to self.return_segmentation_masks) — Whether to return segmentation masks. masks_path (str or pathlib.Path, optional) — Path to the directory containing the segmentation masks. do_resize (bool, optional, defaults to self.do_resize) — Whether to resize the image. size (Dict[str, int], optional, defaults to self.size) — Size of the image after resizing. resample (PILImageResampling, optional, defaults to self.resample) — Resampling filter to use when resizing the image. do_rescale (bool, optional, defaults to self.do_rescale) — Whether to rescale the image. rescale_factor (float, optional, defaults to self.rescale_factor) — Rescale factor to use when rescaling the image. do_normalize (bool, optional, defaults to self.do_normalize) — Whether to normalize the image. image_mean (float or List[float], optional, defaults to self.image_mean) — Mean to use when normalizing the image. image_std (float or List[float], optional, defaults to self.image_std) — Standard deviation to use when normalizing the image. do_pad (bool, optional, defaults to self.do_pad) — Whether to pad the image. 
format (str or AnnotionFormat, optional, defaults to self.format) — Format of the annotations. return_tensors (str or TensorType, optional, defaults to self.return_tensors) — Type of tensors to return. If None, will return the list of images. data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) — The channel dimension format for the output image. Can be one of: "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format. "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format. Unset: Use the channel dimension format of the input image. input_data_format (ChannelDimension or str, optional) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of: "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format. "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format. "none" or ChannelDimension.NONE: image in (height, width) format. Preprocess an image or a batch of images so that it can be used by the model. post_process_object_detection < source > ( outputs threshold: float = 0.5 target_sizes: typing.Union[transformers.utils.generic.TensorType, typing.List[typing.Tuple]] = None ) → List[Dict] Parameters outputs (DetrObjectDetectionOutput) — Raw outputs of the model. threshold (float, optional) — Score threshold to keep object detection predictions. target_sizes (torch.Tensor or List[Tuple[int, int]], optional) — Tensor of shape (batch_size, 2) or list of tuples (Tuple[int, int]) containing the target size (height, width) of each image in the batch. If unset, predictions will not be resized. A list of dictionaries, each dictionary containing the scores, labels and boxes for an image in the batch as predicted by the model. Converts the raw output of DetrForObjectDetection into final bounding boxes in (top_left_x, top_left_y, bottom_right_x, bottom_right_y) format. Only supports PyTorch. post_process_semantic_segmentation < source > ( outputs target_sizes: typing.List[typing.Tuple[int, int]] = None ) → List[torch.Tensor] Parameters outputs (DetrForSegmentation) — Raw outputs of the model. target_sizes (List[Tuple[int, int]], optional) — A list of tuples (Tuple[int, int]) containing the target size (height, width) of each image in the batch. If unset, predictions will not be resized. Returns List[torch.Tensor] A list of length batch_size, where each item is a semantic segmentation map of shape (height, width) corresponding to the target_sizes entry (if target_sizes is specified). Each entry of each torch.Tensor correspond to a semantic class id. Converts the output of DetrForSegmentation into semantic segmentation maps. Only supports PyTorch. post_process_instance_segmentation < source > ( outputs threshold: float = 0.5 mask_threshold: float = 0.5 overlap_mask_area_threshold: float = 0.8 target_sizes: typing.Union[typing.List[typing.Tuple[int, int]], NoneType] = None return_coco_annotation: typing.Optional[bool] = False ) → List[Dict] Parameters outputs (DetrForSegmentation) — Raw outputs of the model. threshold (float, optional, defaults to 0.5) — The probability score threshold to keep predicted instance masks. mask_threshold (float, optional, defaults to 0.5) — Threshold to use when turning the predicted masks into binary values. 
overlap_mask_area_threshold (float, optional, defaults to 0.8) — The overlap mask area threshold to merge or discard small disconnected parts within each binary instance mask. target_sizes (List[Tuple], optional) — List of length (batch_size), where each list item (Tuple[int, int]) corresponds to the requested final size (height, width) of each prediction. If unset, predictions will not be resized. return_coco_annotation (bool, optional, defaults to False) — If set to True, segmentation maps are returned in COCO run-length encoding (RLE) format. A list of dictionaries, one per image, each dictionary containing two keys: segmentation — A tensor of shape (height, width) where each pixel represents a segment_id, or a List[List] run-length encoding (RLE) of the segmentation map if return_coco_annotation is set to True. Set to None if no mask is found above the threshold. segments_info — A dictionary that contains additional information on each segment. id — An integer representing the segment_id. label_id — An integer representing the label / semantic class id corresponding to segment_id. score — Prediction score of the segment with segment_id. Converts the output of DetrForSegmentation into instance segmentation predictions. Only supports PyTorch. post_process_panoptic_segmentation < source > ( outputs threshold: float = 0.5 mask_threshold: float = 0.5 overlap_mask_area_threshold: float = 0.8 label_ids_to_fuse: typing.Optional[typing.Set[int]] = None target_sizes: typing.Union[typing.List[typing.Tuple[int, int]], NoneType] = None ) → List[Dict] Parameters outputs (DetrForSegmentation) — The outputs from DetrForSegmentation. threshold (float, optional, defaults to 0.5) — The probability score threshold to keep predicted instance masks. mask_threshold (float, optional, defaults to 0.5) — Threshold to use when turning the predicted masks into binary values. overlap_mask_area_threshold (float, optional, defaults to 0.8) — The overlap mask area threshold to merge or discard small disconnected parts within each binary instance mask. label_ids_to_fuse (Set[int], optional) — The labels in this set will have all their instances fused together. For instance, we could say there can only be one sky in an image, but several persons, so the label ID for sky would be in that set, but not the one for person. target_sizes (List[Tuple], optional) — List of length (batch_size), where each list item (Tuple[int, int]) corresponds to the requested final size (height, width) of each prediction in the batch. If unset, predictions will not be resized. A list of dictionaries, one per image, each dictionary containing two keys: segmentation — A tensor of shape (height, width) where each pixel represents a segment_id, or None if no mask is found above the threshold. If target_sizes is specified, segmentation is resized to the corresponding target_sizes entry. segments_info — A dictionary that contains additional information on each segment. id — An integer representing the segment_id. label_id — An integer representing the label / semantic class id corresponding to segment_id. was_fused — A boolean, True if label_id was in label_ids_to_fuse, False otherwise. Multiple instances of the same class / label were fused and assigned a single segment_id. score — Prediction score of the segment with segment_id. Converts the output of DetrForSegmentation into image panoptic segmentation predictions. Only supports PyTorch. DetrFeatureExtractor Preprocess an image or a batch of images.
DetrModel class transformers.DetrModel < source > ( config: DetrConfig ) Parameters config (DetrConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare DETR Model (consisting of a backbone and encoder-decoder Transformer) outputting raw hidden-states without any specific head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.
forward < source > ( pixel_values: FloatTensor pixel_mask: typing.Optional[torch.LongTensor] = None decoder_attention_mask: typing.Optional[torch.FloatTensor] = None encoder_outputs: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.detr.modeling_detr.DetrModelOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See DetrImageProcessor.call() for details. pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) — Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]: 1 for pixels that are real (i.e. not masked), 0 for pixels that are padding (i.e. masked). What are attention masks? decoder_attention_mask (torch.FloatTensor of shape (batch_size, num_queries), optional) — Not used by default. Can be used to mask object queries. encoder_outputs (tuple(tuple(torch.FloatTensor), optional) — Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions) last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing the flattened feature map (output of the backbone + projection layer), you can choose to directly pass a flattened representation of an image. decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, num_queries, hidden_size), optional) — Optionally, instead of initializing the queries with a tensor of zeros, you can choose to directly pass an embedded representation. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.models.detr.modeling_detr.DetrModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DetrConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the decoder of the model. decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. 
decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. intermediate_hidden_states (torch.FloatTensor of shape (config.decoder_layers, batch_size, sequence_length, hidden_size), optional, returned when config.auxiliary_loss=True) — Intermediate decoder activations, i.e. the output of each decoder layer, each of them gone through a layernorm. The DetrModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import AutoImageProcessor, DetrModel >>> from PIL import Image >>> import requests >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image_processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50") >>> model = DetrModel.from_pretrained("facebook/detr-resnet-50") >>> >>> inputs = image_processor(images=image, return_tensors="pt") >>> >>> outputs = model(**inputs) >>> >>> >>> last_hidden_states = outputs.last_hidden_state >>> list(last_hidden_states.shape) [1, 100, 256] DetrForObjectDetection class transformers.DetrForObjectDetection < source > ( config: DetrConfig ) Parameters config (DetrConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DETR Model (consisting of a backbone and encoder-decoder Transformer) with object detection heads on top, for tasks such as COCO detection. This model inherits from PreTrainedModel. 
Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: FloatTensor pixel_mask: typing.Optional[torch.LongTensor] = None decoder_attention_mask: typing.Optional[torch.FloatTensor] = None encoder_outputs: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[typing.List[dict]] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.detr.modeling_detr.DetrObjectDetectionOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See DetrImageProcessor.call() for details. pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) — Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]: 1 for pixels that are real (i.e. not masked), 0 for pixels that are padding (i.e. masked). What are attention masks? decoder_attention_mask (torch.FloatTensor of shape (batch_size, num_queries), optional) — Not used by default. Can be used to mask object queries. encoder_outputs (tuple(tuple(torch.FloatTensor), optional) — Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions) last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing the flattened feature map (output of the backbone + projection layer), you can choose to directly pass a flattened representation of an image. decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, num_queries, hidden_size), optional) — Optionally, instead of initializing the queries with a tensor of zeros, you can choose to directly pass an embedded representation. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (List[Dict] of len (batch_size,), optional) — Labels for computing the bipartite matching loss. List of dicts, each dictionary containing at least the following 2 keys: ‘class_labels’ and ‘boxes’ (the class labels and bounding boxes of an image in the batch respectively). The class labels themselves should be a torch.LongTensor of len (number of bounding boxes in the image,) and the boxes a torch.FloatTensor of shape (number of bounding boxes in the image, 4). 
A transformers.models.detr.modeling_detr.DetrObjectDetectionOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DetrConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels are provided)) — Total loss as a linear combination of a negative log-likehood (cross-entropy) for class prediction and a bounding box loss. The latter is defined as a linear combination of the L1 loss and the generalized scale-invariant IoU loss. loss_dict (Dict, optional) — A dictionary containing the individual losses. Useful for logging. logits (torch.FloatTensor of shape (batch_size, num_queries, num_classes + 1)) — Classification logits (including no-object) for all queries. pred_boxes (torch.FloatTensor of shape (batch_size, num_queries, 4)) — Normalized boxes coordinates for all queries, represented as (center_x, center_y, width, height). These values are normalized in [0, 1], relative to the size of each individual image in the batch (disregarding possible padding). You can use post_process_object_detection() to retrieve the unnormalized bounding boxes. auxiliary_outputs (list[Dict], optional) — Optional, only returned when auxilary losses are activated (i.e. config.auxiliary_loss is set to True) and labels are provided. It is a list of dictionaries containing the two above keys (logits and pred_boxes) for each decoder layer. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the decoder of the model. decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. 
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. The DetrForObjectDetection forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import AutoImageProcessor, DetrForObjectDetection >>> import torch >>> from PIL import Image >>> import requests >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> image_processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50") >>> model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50") >>> inputs = image_processor(images=image, return_tensors="pt") >>> outputs = model(**inputs) >>> >>> target_sizes = torch.tensor([image.size[::-1]]) >>> results = image_processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[ ... 0 ... ] >>> for score, label, box in zip(results["scores"], results["labels"], results["boxes"]): ... box = [round(i, 2) for i in box.tolist()] ... print( ... f"Detected {model.config.id2label[label.item()]} with confidence " ... f"{round(score.item(), 3)} at location {box}" ... ) Detected remote with confidence 0.998 at location [40.16, 70.81, 175.55, 117.98] Detected remote with confidence 0.996 at location [333.24, 72.55, 368.33, 187.66] Detected couch with confidence 0.995 at location [-0.02, 1.15, 639.73, 473.76] Detected cat with confidence 0.999 at location [13.24, 52.05, 314.02, 470.93] Detected cat with confidence 0.999 at location [345.4, 23.85, 640.37, 368.72] DetrForSegmentation class transformers.DetrForSegmentation < source > ( config: DetrConfig ) Parameters config (DetrConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. DETR Model (consisting of a backbone and encoder-decoder Transformer) with a segmentation head on top, for tasks such as COCO panoptic. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
forward < source > ( pixel_values: FloatTensor pixel_mask: typing.Optional[torch.LongTensor] = None decoder_attention_mask: typing.Optional[torch.FloatTensor] = None encoder_outputs: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[typing.List[dict]] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.detr.modeling_detr.DetrSegmentationOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See DetrImageProcessor.call() for details. pixel_mask (torch.LongTensor of shape (batch_size, height, width), optional) — Mask to avoid performing attention on padding pixel values. Mask values selected in [0, 1]: 1 for pixels that are real (i.e. not masked), 0 for pixels that are padding (i.e. masked). What are attention masks? decoder_attention_mask (torch.FloatTensor of shape (batch_size, num_queries), optional) — Not used by default. Can be used to mask object queries. encoder_outputs (tuple(tuple(torch.FloatTensor), optional) — Tuple consists of (last_hidden_state, optional: hidden_states, optional: attentions) last_hidden_state of shape (batch_size, sequence_length, hidden_size), optional) is a sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention of the decoder. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing the flattened feature map (output of the backbone + projection layer), you can choose to directly pass a flattened representation of an image. decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, num_queries, hidden_size), optional) — Optionally, instead of initializing the queries with a tensor of zeros, you can choose to directly pass an embedded representation. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (List[Dict] of len (batch_size,), optional) — Labels for computing the bipartite matching loss, DICE/F-1 loss and Focal loss. List of dicts, each dictionary containing at least the following 3 keys: ‘class_labels’, ‘boxes’ and ‘masks’ (the class labels, bounding boxes and segmentation masks of an image in the batch respectively). The class labels themselves should be a torch.LongTensor of len (number of bounding boxes in the image,), the boxes a torch.FloatTensor of shape (number of bounding boxes in the image, 4) and the masks a torch.FloatTensor of shape (number of bounding boxes in the image, height, width). A transformers.models.detr.modeling_detr.DetrSegmentationOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DetrConfig) and inputs. 
loss (torch.FloatTensor of shape (1,), optional, returned when labels are provided)) — Total loss as a linear combination of a negative log-likehood (cross-entropy) for class prediction and a bounding box loss. The latter is defined as a linear combination of the L1 loss and the generalized scale-invariant IoU loss. loss_dict (Dict, optional) — A dictionary containing the individual losses. Useful for logging. logits (torch.FloatTensor of shape (batch_size, num_queries, num_classes + 1)) — Classification logits (including no-object) for all queries. pred_boxes (torch.FloatTensor of shape (batch_size, num_queries, 4)) — Normalized boxes coordinates for all queries, represented as (center_x, center_y, width, height). These values are normalized in [0, 1], relative to the size of each individual image in the batch (disregarding possible padding). You can use post_process_object_detection() to retrieve the unnormalized bounding boxes. pred_masks (torch.FloatTensor of shape (batch_size, num_queries, height/4, width/4)) — Segmentation masks logits for all queries. See also post_process_semantic_segmentation() or post_process_instance_segmentation() post_process_panoptic_segmentation() to evaluate semantic, instance and panoptic segmentation masks respectively. auxiliary_outputs (list[Dict], optional) — Optional, only returned when auxiliary losses are activated (i.e. config.auxiliary_loss is set to True) and labels are provided. It is a list of dictionaries containing the two above keys (logits and pred_boxes) for each decoder layer. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the decoder of the model. decoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the decoder at the output of each layer plus the initial embedding outputs. decoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder, after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. encoder_last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Sequence of hidden-states at the output of the last layer of the encoder of the model. encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the encoder at the output of each layer plus the initial embedding outputs. 
encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the encoder, after the attention softmax, used to compute the weighted average in the self-attention heads. The DetrForSegmentation forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples:

>>> import io
>>> import requests
>>> from PIL import Image
>>> import torch
>>> import numpy
>>> from transformers import AutoImageProcessor, DetrForSegmentation
>>> from transformers.image_transforms import rgb_to_id

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50-panoptic")
>>> model = DetrForSegmentation.from_pretrained("facebook/detr-resnet-50-panoptic")

>>> # prepare image for the model
>>> inputs = image_processor(images=image, return_tensors="pt")

>>> # forward pass
>>> outputs = model(**inputs)

>>> # post-process the outputs into panoptic segmentation maps of size (300, 500)
>>> result = image_processor.post_process_panoptic_segmentation(outputs, target_sizes=[(300, 500)])

>>> # a (height, width) tensor in which each value denotes a segment id
>>> panoptic_seg = result[0]["segmentation"]

>>> # per-segment metadata (id, label_id, was_fused, score)
>>> panoptic_segments_info = result[0]["segments_info"]
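The same outputs and image_processor can also be fed to the other post-processing methods documented above. A short sketch of semantic post-processing, reusing the variables from the example above:

>>> # reuse `outputs` and `image_processor` from the example above
>>> semantic_maps = image_processor.post_process_semantic_segmentation(outputs, target_sizes=[(300, 500)])

>>> # one (height, width) map per image, where each entry is a semantic class id
>>> semantic_maps[0].shape
torch.Size([300, 500])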
https://huggingface.co/docs/transformers/model_doc/dit
DiT Overview DiT was proposed in DiT: Self-supervised Pre-training for Document Image Transformer by Junlong Li, Yiheng Xu, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei. DiT applies the self-supervised objective of BEiT (BERT pre-training of Image Transformers) to 42 million document images, allowing for state-of-the-art results on tasks including: document image classification: the RVL-CDIP dataset (a collection of 400,000 images belonging to one of 16 classes). document layout analysis: the PubLayNet dataset (a collection of more than 360,000 document images constructed by automatically parsing PubMed XML files). table detection: the ICDAR 2019 cTDaR dataset (a collection of 600 training images and 240 testing images). The abstract from the paper is the following: Image Transformer has recently achieved significant progress for natural image understanding, either using supervised (ViT, DeiT, etc.) or self-supervised (BEiT, MAE, etc.) pre-training techniques. In this paper, we propose DiT, a self-supervised pre-trained Document Image Transformer model using large-scale unlabeled text images for Document AI tasks, which is essential since no supervised counterparts ever exist due to the lack of human labeled document images. We leverage DiT as the backbone network in a variety of vision-based Document AI tasks, including document image classification, document layout analysis, as well as table detection. Experiment results have illustrated that the self-supervised pre-trained DiT model achieves new state-of-the-art results on these downstream tasks, e.g. document image classification (91.11 → 92.69), document layout analysis (91.0 → 94.9) and table detection (94.23 → 96.55). Summary of the approach. Taken from the [original paper](https://arxiv.org/abs/2203.02378). One can directly use the weights of DiT with the AutoModel API: from transformers import AutoModel model = AutoModel.from_pretrained("microsoft/dit-base") This will load the model pre-trained on masked image modeling. Note that this won’t include the language modeling head on top, used to predict visual tokens. To include the head, you can load the weights into a BeitForMaskedImageModeling model, like so: from transformers import BeitForMaskedImageModeling model = BeitForMaskedImageModeling.from_pretrained("microsoft/dit-base") You can also load a fine-tuned model from the hub, like so: from transformers import AutoModelForImageClassification model = AutoModelForImageClassification.from_pretrained("microsoft/dit-base-finetuned-rvlcdip") This particular checkpoint was fine-tuned on RVL-CDIP, an important benchmark for document image classification. A notebook that illustrates inference for document image classification can be found here. As DiT’s architecture is equivalent to that of BEiT, one can refer to BEiT’s documentation page for all tips, code examples and notebooks. This model was contributed by nielsr. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DiT. Image Classification BeitForImageClassification is supported by this example script and notebook. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.
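As a rough sketch of such an inference pass with the fine-tuned RVL-CDIP checkpoint mentioned above (the file path is a placeholder for a local document image):

from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

# placeholder path to a local document image
image = Image.open("path/to/document_image.png").convert("RGB")

processor = AutoImageProcessor.from_pretrained("microsoft/dit-base-finetuned-rvlcdip")
model = AutoModelForImageClassification.from_pretrained("microsoft/dit-base-finetuned-rvlcdip")

inputs = processor(images=image, return_tensors="pt")
logits = model(**inputs).logits

# id2label maps the predicted index to one of the 16 RVL-CDIP document classes
predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])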
https://huggingface.co/docs/transformers/model_doc/gptsan-japanese
GPTSAN-japanese Overview The GPTSAN-japanese model was released in the repository by Toshiyuki Sakamoto (tanreinama). GPTSAN is a Japanese language model using Switch Transformer. It has the same structure as the model introduced as Prefix LM in the T5 paper, and supports both text generation and masked language modeling tasks. These basic tasks can similarly be fine-tuned for translation or summarization. Generation The generate() method can be used to generate text using the GPTSAN-japanese model.

>>> from transformers import AutoModel, AutoTokenizer
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")
>>> model = AutoModel.from_pretrained("Tanrei/GPTSAN-japanese").cuda()
>>> x_tok = tokenizer("は、", prefix_text="織田信長", return_tensors="pt")
>>> torch.manual_seed(0)
>>> gen_tok = model.generate(x_tok.input_ids.cuda(), token_type_ids=x_tok.token_type_ids.cuda(), max_new_tokens=20)
>>> tokenizer.decode(gen_tok[0])
'織田信長は、2004年に『戦国BASARA』のために、豊臣秀吉'

GPTSAN Features GPTSAN has some unique features. It has a model structure of Prefix-LM. It works as a shifted Masked Language Model for prefix input tokens. Un-prefixed inputs behave like normal generative models. The Spout vector is a GPTSAN-specific input. Spout is pre-trained with random inputs, but you can specify a class of text or an arbitrary vector during fine-tuning. This allows you to indicate the tendency of the generated text. GPTSAN has a sparse Feed Forward based on Switch Transformer. You can also add other layers and train them partially. See the original GPTSAN repository for details. Prefix-LM Model GPTSAN has the structure of the model named Prefix-LM in the T5 paper (the original GPTSAN repository calls it hybrid). In GPTSAN, the Prefix part of Prefix-LM, that is, the input positions that can be referenced by both earlier and later tokens, can be specified with any length. Arbitrary lengths can also be specified differently for each batch. This length applies to the text entered in prefix_text for the tokenizer. The tokenizer returns the mask of the Prefix part of Prefix-LM as token_type_ids. The model treats the part where token_type_ids is 1 as the Prefix part, that is, the input can refer to both tokens before and after. Tips: Specifying the Prefix part is done with a mask passed to self-attention. When token_type_ids=None or all zero, it is equivalent to a regular causal mask. For example:

x_token = tokenizer("アイウエ")
input_ids:      | SOT | SEG | ア | イ | ウ | エ |
token_type_ids: |  1  |  0  |  0 |  0 |  0 |  0 |
prefix_lm_mask:
SOT | 1 0 0 0 0 0 |
SEG | 1 1 0 0 0 0 |
ア  | 1 1 1 0 0 0 |
イ  | 1 1 1 1 0 0 |
ウ  | 1 1 1 1 1 0 |
エ  | 1 1 1 1 1 1 |

x_token = tokenizer("", prefix_text="アイウエ")
input_ids:      | SOT | ア | イ | ウ | エ | SEG |
token_type_ids: |  1  |  1 |  1 |  1 |  1 |  0  |
prefix_lm_mask:
SOT | 1 1 1 1 1 0 |
ア  | 1 1 1 1 1 0 |
イ  | 1 1 1 1 1 0 |
ウ  | 1 1 1 1 1 0 |
エ  | 1 1 1 1 1 0 |
SEG | 1 1 1 1 1 1 |

x_token = tokenizer("ウエ", prefix_text="アイ")
input_ids:      | SOT | ア | イ | SEG | ウ | エ |
token_type_ids: |  1  |  1 |  1 |  0  |  0 |  0 |
prefix_lm_mask:
SOT | 1 1 1 0 0 0 |
ア  | 1 1 1 0 0 0 |
イ  | 1 1 1 0 0 0 |
SEG | 1 1 1 1 0 0 |
ウ  | 1 1 1 1 1 0 |
エ  | 1 1 1 1 1 1 |

Spout Vector A Spout Vector is a special vector for controlling text generation. This vector is treated as the first embedding in self-attention to bring extraneous attention to the generated tokens.
In the pre-trained model published from Tanrei/GPTSAN-japanese, the Spout Vector is a 128-dimensional vector that passes through 8 fully connected layers in the model and is projected into the space acting as external attention. The Spout Vector projected by the fully connected layer is split to be passed to all self-attentions. GPTSanJapaneseConfig class transformers.GPTSanJapaneseConfig < source > ( vocab_size = 36000 max_position_embeddings = 1280 d_model = 1024 d_ff = 8192 d_ext = 4096 d_spout = 128 num_switch_layers = 10 num_ext_layers = 0 num_heads = 16 num_experts = 16 expert_capacity = 128 dropout_rate = 0.0 layer_norm_epsilon = 1e-05 router_bias = False router_jitter_noise = 0.0 router_dtype = 'float32' router_ignore_padding_tokens = False output_hidden_states = False output_attentions = False initializer_factor = 0.002 output_router_logits = False use_cache = True separator_token_id = 35998 pad_token_id = 35995 eos_token_id = 35999 **kwargs ) Parameters vocab_size (int, optional, defaults to 36000) — Vocabulary size of the GPTSANJapanese model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling GPTSanJapaneseModel. max_position_embeddings (int, optional, defaults to 1280) — The maximum sequence length that this model might ever be used with. Defaults set this to 1280. d_model (int, optional, defaults to 1024) — Size of the encoder layers and the pooler layer. d_ff (int, optional, defaults to 8192) — Size of the intermediate feed forward layer in each SwitchTransformersBlock. d_ext (int, optional, defaults to 4096) — Size of the intermediate feed forward layer in each Extra-layers. d_spout (int, optional, defaults to 128) — Size of the spout vector. num_switch_layers (int, optional, defaults to 10) — Number of layers in the Switch Transformer layer. num_ext_layers (int, optional, defaults to 0) — Number of layers in the Extra-layers. num_heads (int, optional, defaults to 16) — Number of attention heads for each attention layer in the Transformer encoder. num_experts (int, optional, defaults to 16) — Number of experts for each SwitchTransformer layer. expert_capacity (int, optional, defaults to 128) — Number of tokens that can be stored in each expert. If set to 1, the model will behave like a regular Transformer. dropout_rate (float, optional, defaults to 0.0) — The ratio for all dropout layers. layer_norm_eps (float, optional, defaults to 1e-5) — The epsilon used by the layer normalization layers. router_bias (bool, optional, defaults to False) — Whether to add a bias to the router. router_jitter_noise (float, optional, defaults to 0.0) — Amount of noise to add to the router. Set it to 0.0 during prediction or set small value (usually 1e-2) during training. router_dtype (str, optional, default to "float32") — The dtype used for the routers. It is preferable to keep the dtype to "float32" as specified in the selective precision discussion in the paper. router_ignore_padding_tokens (bool, optional, defaults to False) — Whether to ignore padding tokens when routing. output_hidden_states (bool, optional, default to False) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. output_attentions (bool, optional, defaults to False) — Whether or not to return the attentions tensors of all attention layers. initializer_factor (float, optional, defaults to 0.002) — A factor for initializing all weight matrices. 
output_router_logits (bool, optional, defaults to False) — Whether or not to return the router logits of all experts. use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). This is the configuration class to store the configuration of a GPTSanJapaneseModel. It is used to instantiate a GPTSANJapanese model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the GPTSANJapanese Tanrei/GPTSAN-japanese architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. GPTSanJapaneseTokenizer class transformers.GPTSanJapaneseTokenizer < source > ( vocab_file emoji_file unk_token = '<|nottoken|>' pad_token = '<|separator|>' bos_token = '<|startoftext|>' eos_token = '<|endoftext|>' sep_token = '<|segmenter|>' do_clean_text = False **kwargs ) Parameters vocab_file (str) — File containing the vocabulary. emoji_file (str) — File containing the emoji. unk_token (str, optional, defaults to "<|nottoken|>") — The token used for unknown characters. pad_token (str, optional, defaults to "<|separator|>") — The token used for padding. bos_token (str, optional, defaults to "<|startoftext|>") — The beginning of sequence token. eos_token (str, optional, defaults to "<|endoftext|>") — The end of sequence token. sep_token (str, optional, defaults to "<|segmenter|>") — A special token used to separate the prefix part from the general input part. do_clean_text (bool, optional, defaults to False) — Whether or not to clean text for URL, EMAIL, TEL, Japanese DATE and Japanese PRICE. This tokenizer is based on GPTNeoXJapaneseTokenizer and has the following modifications:
Decoding byte0~byte255 tokens correctly
Added bagofword token handling
Returning token_type_ids for the Prefix-LM model
The bagofword token represents a repetition of the previous token and is converted to 3 consecutive tokens when decoding. In addition, the original Japanese special Sub-Word-Encoding has been released in this repository (https://github.com/tanreinama/Japanese-BPEEncoder_V2). The token_type_ids is a mask indicating the prefix input positions for the Prefix-LM model. To specify a prefix position, specify a prefix input for prefix_text, or specify a sentence of the prefix part and the part after it as a text pair of batch input.
Example: >>> from transformers import GPTSanJapaneseTokenizer >>> tokenizer = GPTSanJapaneseTokenizer.from_pretrained("Tanrei/GPTSAN-japanese") >>> >>> tokenizer("吾輩は猫である🐯。実は慶応(慶應)大学出身")["input_ids"] [35993, 35998, 34347, 31459, 30647, 31448, 25, 30659, 35729, 35676, 32417, 30647, 17750, 35589, 17750, 35590, 321, 1281] >>> >>> tokenizer.decode(tokenizer("吾輩は猫である🐯。実は慶応(慶應)大学出身")["input_ids"]) '吾輩は猫である🐯。実は慶応(慶応)大学出身' Example for Prefix-LM: >>> from transformers import GPTSanJapaneseTokenizer >>> tokenizer = GPTSanJapaneseTokenizer.from_pretrained("Tanrei/GPTSAN-japanese") >>> tokenizer("実は慶応(慶應)大学出身", prefix_text="吾輩は猫である🐯。")["input_ids"] [35993, 34347, 31459, 30647, 31448, 25, 30659, 35729, 35676, 35998, 32417, 30647, 17750, 35589, 17750, 35590, 321, 1281] >>> >>> tokenizer("実は慶応(慶應)大学出身", prefix_text="吾輩は猫である🐯。")["token_type_ids"] [1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0] Example for batch encode: >>> from transformers import GPTSanJapaneseTokenizer >>> tokenizer = GPTSanJapaneseTokenizer.from_pretrained("Tanrei/GPTSAN-japanese") >>> tokenizer([["武田信玄", "は、"], ["織田信長", "の配下の、"]], padding=True)["input_ids"] [[35993, 8640, 25948, 35998, 30647, 35675, 35999, 35999], [35993, 10382, 9868, 35998, 30646, 9459, 30646, 35675]] >>> >>> tokenizer([["武田信玄", "は、"], ["織田信長", "の配下の、"]], padding=True)["token_type_ids"] [[1, 1, 1, 0, 0, 0, 0, 0], [1, 1, 1, 0, 0, 0, 0, 0]] >>> >>> tokenizer([["武田信玄", "は、"], ["織田信長", "の配下の、"]], padding=True)["attention_mask"] [[1, 1, 1, 1, 1, 1, 0, 0], [1, 1, 1, 1, 1, 1, 1, 1]] Converts a sequence of tokens (string) in a single string. create_token_type_ids_from_sequences < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) The tokenizer returns token_type_ids as separators between the Prefix part and the rest. token_type_ids is 1 for the Prefix part and 0 for the rest of the token. Example: >>> from transformers import GPTSanJapaneseTokenizer >>> tokenizer = GPTSanJapaneseTokenizer.from_pretrained("Tanrei/GPTSAN-japanese") >>> x_token = tokenizer("アイウエ") >>> >>> >>> x_token = tokenizer("", prefix_text="アイウエ") >>> >>> >>> x_token = tokenizer("ウエ", prefix_text="アイ") >>> >>> GPTSanJapaneseModel class transformers.GPTSanJapaneseModel < source > ( config: GPTSanJapaneseConfig ) Parameters config (GPTSanJapaneseConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare GPTSAN-japanese Model transformer outputting raw hidden-states without any specific head on top. The GPTSAN-japanese model was proposed in General-purpose Swich transformer based Japanese language model This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
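A minimal sketch of a plain forward pass with the base model, assuming it returns a standard ModelOutput carrying last_hidden_state (the prefix handling mirrors the tokenizer examples above):

>>> import torch
>>> from transformers import AutoTokenizer, GPTSanJapaneseModel

>>> tokenizer = AutoTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")
>>> model = GPTSanJapaneseModel.from_pretrained("Tanrei/GPTSAN-japanese")

>>> # token_type_ids marks the Prefix-LM part produced by prefix_text
>>> x_tok = tokenizer("は、", prefix_text="織田信長", return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(input_ids=x_tok.input_ids, token_type_ids=x_tok.token_type_ids)
>>> outputs.last_hidden_state.shape  # (batch_size, sequence_length, d_model)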
forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.FloatTensor] = None spout: typing.Optional[torch.FloatTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None head_mask: typing.Optional[torch.FloatTensor] = None use_cache: typing.Optional[bool] = False inputs_embeds: typing.Optional[torch.FloatTensor] = None decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None output_router_logits: typing.Optional[bool] = None num_precontext: typing.Optional[torch.LongTensor] = None ) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. GPTSAN-japanese is a model that generates sentence continuations or predicts tokens at mask positions. Special tokens required for inputs to the model are automatically appended. attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.FloatTensor of shape (batch_size, sequence_length), optional) — An input that masks the Prefix part in the Prefix-LM input. Mask values selected in [0, 1]: 1 for tokens that are prefix input, 0 for tokens that are not-prefix input. spout (torch.Tensor of shape (batch_size, config.d_spout)) — This vector is transformed through an 8-layer FFN and can be used instead of past_key_values. past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) — Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be input (see past_key_values). This is useful if you want more control over how to convert decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix. 
output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. router_logits (tuple(torch.FloatTensor), optional, returned when output_router_logits=True is passed or when config.add_router_probs=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, sequence_length, num_experts). Router logits of the decoder model, useful to compute the auxiliary loss for Mixture of Experts models. num_precontext (torch.LongTensor of shape (batch_size,1)) — length of hybrid input tokens in the input. Tokens up to this length refer to both front and back like BERT, tokens after that refer only to front like GPT. see also: https://github.com/tanreinama/GPTSAN/blob/main/report/model.md The GPTSanJapaneseModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. GPTSanJapaneseForConditionalGeneration class transformers.GPTSanJapaneseForConditionalGeneration < source > ( config: GPTSanJapaneseConfig ) Parameters config (GPTSanJapaneseConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare GPTSAN-japanese Model with a language modeling head. The GPTSAN-japanese model was proposed in General-purpose Swich transformer based Japanese language model This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.FloatTensor] = None spout: typing.Optional[torch.FloatTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None head_mask: typing.Optional[torch.FloatTensor] = None use_cache: typing.Optional[bool] = False inputs_embeds: typing.Optional[torch.FloatTensor] = None decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None output_router_logits: typing.Optional[bool] = None labels: typing.Optional[torch.LongTensor] = None ) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. GPTSAN-japanese is a model that generates sentence continuations or predicts tokens at mask positions. Special tokens required for inputs to the model are automatically appended. attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? 
token_type_ids (torch.FloatTensor of shape (batch_size, sequence_length), optional) — An input that masks the Prefix part in the Prefix-LM input. Mask values selected in [0, 1]: 1 for tokens that are prefix input, 0 for tokens that are not-prefix input. spout (torch.Tensor of shape (batch_size, config.d_spout)) — This vector is transformed through an 8-layer FFN and can be used instead of past_key_values. past_key_values (tuple(tuple(torch.FloatTensor)) of length config.n_layers with each tuple having 4 tensors of shape (batch_size, num_heads, sequence_length - 1, embed_size_per_head)) — Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. If past_key_values are used, the user can optionally input only the last decoder_input_ids (those that don’t have their past key value states given to this model) of shape (batch_size, 1) instead of all decoder_input_ids of shape (batch_size, sequence_length). head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. decoder_inputs_embeds (torch.FloatTensor of shape (batch_size, target_sequence_length, hidden_size), optional) — Optionally, instead of passing decoder_input_ids you can choose to directly pass an embedded representation. If past_key_values is used, optionally only the last decoder_inputs_embeds have to be input (see past_key_values). This is useful if you want more control over how to convert decoder_input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. router_logits (tuple(torch.FloatTensor), optional, returned when output_router_logits=True is passed or when config.add_router_probs=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, sequence_length, num_experts). Router logits of the decoder model, useful to compute the auxiliary loss for Mixture of Experts models. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification loss. Indices should be in [-100, 0, ..., config.vocab_size - 1]. All labels set to -100 are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size] The GPTSanJapaneseForConditionalGeneration forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example: Text Generation with regular LM Model >>> from transformers import AutoModel, AutoTokenizer, trainer_utils >>> device = "cuda" >>> model = AutoModel.from_pretrained("Tanrei/GPTSAN-japanese").to(device) >>> tokenizer = AutoTokenizer.from_pretrained("Tanrei/GPTSAN-japanese") >>> x_token = tokenizer("織田信長は、", return_tensors="pt") >>> trainer_utils.set_seed(30) >>> input_ids = x_token.input_ids.to(device) >>> gen_token = model.generate(input_ids, max_new_tokens=50) >>> tokenizer.decode(gen_token[0]) "織田信長は、政治・軍事の中枢まで掌握した政治家であり、日本史上類を見ない驚異的な軍事侵攻を続け..." Text Generation with Prefix-LM Model >>> from transformers import AutoModel, AutoTokenizer, trainer_utils >>> device = "cuda" >>> model = AutoModel.from_pretrained("Tanrei/GPTSAN-japanese").to(device) >>> tokenizer = AutoTokenizer.from_pretrained("Tanrei/GPTSAN-japanese") >>> x_token = tokenizer("", prefix_text="織田信長は、", return_tensors="pt") >>> trainer_utils.set_seed(30) >>> input_ids = x_token.input_ids.to(device) >>> token_type_ids = x_token.token_type_ids.to(device) >>> gen_token = model.generate(input_ids, token_type_ids=token_type_ids, max_new_tokens=50) >>> tokenizer.decode(gen_token[0]) "織田信長は、政治・外交で数々の戦果を上げるが、1568年からは、いわゆる本能寺の変で細川晴元に暗殺される..." Simultaneously Text Generation And Masked Language Model >>> from transformers import AutoModel, AutoTokenizer, trainer_utils >>> device = "cuda" >>> model = AutoModel.from_pretrained("Tanrei/GPTSAN-japanese").to(device) >>> tokenizer = AutoTokenizer.from_pretrained("Tanrei/GPTSAN-japanese") >>> masked_sentence = "武田信玄は、<|inputmask|>時代ファンならぜひ押さえ<|inputmask|>きたい名将の一人。" >>> x_token = tokenizer("", prefix_text=masked_sentence, return_tensors="pt") >>> trainer_utils.set_seed(30) >>> input_ids = x_token.input_ids.to(device) >>> token_type_ids = x_token.token_type_ids.to(device) >>> out_lm_token = model.generate(input_ids, token_type_ids=token_type_ids, max_new_tokens=50) >>> out_mlm_token = model(input_ids, token_type_ids=token_type_ids).logits.argmax(axis=-1) >>> tokenizer.decode(out_mlm_token[0]) "武田信玄は、戦国時代ファンならぜひ押さえておきたい名将の一人。" >>> tokenizer.decode(out_lm_token[0][input_ids.shape[1] :]) "武田氏の三代に渡った武田家のひとり\n甲斐市に住む、日本史上最大の戦国大名。..."
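The spout argument is not exercised by the examples above. Below is a hedged sketch of passing it to the forward pass; the random vector is purely illustrative (in practice spout would carry whatever conditioning signal the 8-layer FFN was trained on), and only arguments documented in the signature above are used:

>>> import torch
>>> from transformers import AutoTokenizer, GPTSanJapaneseForConditionalGeneration

>>> tokenizer = AutoTokenizer.from_pretrained("Tanrei/GPTSAN-japanese")
>>> model = GPTSanJapaneseForConditionalGeneration.from_pretrained("Tanrei/GPTSAN-japanese")

>>> x_token = tokenizer("織田信長は、", return_tensors="pt")
>>> # spout must have shape (batch_size, config.d_spout); a random vector is used here only to show the call.
>>> spout = torch.rand(1, model.config.d_spout)
>>> outputs = model(input_ids=x_token.input_ids, token_type_ids=x_token.token_type_ids, spout=spout)
>>> logits = outputs.logits  # shape: (batch_size, sequence_length, vocab_size)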
https://huggingface.co/docs/transformers/model_doc/donut
Donut Overview The Donut model was proposed in OCR-free Document Understanding Transformer by Geewook Kim, Teakgyu Hong, Moonbin Yim, Jeongyeon Nam, Jinyoung Park, Jinyeong Yim, Wonseok Hwang, Sangdoo Yun, Dongyoon Han, Seunghyun Park. Donut consists of an image Transformer encoder and an autoregressive text Transformer decoder to perform document understanding tasks such as document image classification, form understanding and visual question answering. The abstract from the paper is the following: Understanding document images (e.g., invoices) is a core but challenging task since it requires complex functions such as reading text and a holistic understanding of the document. Current Visual Document Understanding (VDU) methods outsource the task of reading text to off-the-shelf Optical Character Recognition (OCR) engines and focus on the understanding task with the OCR outputs. Although such OCR-based approaches have shown promising performance, they suffer from 1) high computational costs for using OCR; 2) inflexibility of OCR models on languages or types of document; 3) OCR error propagation to the subsequent process. To address these issues, in this paper, we introduce a novel OCR-free VDU model named Donut, which stands for Document understanding transformer. As the first step in OCR-free VDU research, we propose a simple architecture (i.e., Transformer) with a pre-training objective (i.e., cross-entropy loss). Donut is conceptually simple yet effective. Through extensive experiments and analyses, we show a simple OCR-free VDU model, Donut, achieves state-of-the-art performances on various VDU tasks in terms of both speed and accuracy. In addition, we offer a synthetic data generator that helps the model pre-training to be flexible in various languages and domains. Donut high-level overview. Taken from the original paper. This model was contributed by nielsr. The original code can be found here. Tips: The quickest way to get started with Donut is by checking the tutorial notebooks, which show how to use the model at inference time as well as fine-tuning on custom data. Donut is always used within the VisionEncoderDecoder framework. Inference Donut’s VisionEncoderDecoder model accepts images as input and makes use of generate() to autoregressively generate text given the input image. The DonutImageProcessor class is responsible for preprocessing the input image and [XLMRobertaTokenizer/XLMRobertaTokenizerFast] decodes the generated target tokens to the target string. The DonutProcessor wraps DonutImageProcessor and [XLMRobertaTokenizer/XLMRobertaTokenizerFast] into a single instance to both extract the input features and decode the predicted token ids. Step-by-step Document Image Classification >>> import re >>> from transformers import DonutProcessor, VisionEncoderDecoderModel >>> from datasets import load_dataset >>> import torch >>> processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-rvlcdip") >>> model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-rvlcdip") >>> device = "cuda" if torch.cuda.is_available() else "cpu" >>> model.to(device) >>> >>> dataset = load_dataset("hf-internal-testing/example-documents", split="test") >>> image = dataset[1]["image"] >>> >>> task_prompt = "<s_rvlcdip>" >>> decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids >>> pixel_values = processor(image, return_tensors="pt").pixel_values >>> outputs = model.generate( ... 
pixel_values.to(device), ... decoder_input_ids=decoder_input_ids.to(device), ... max_length=model.decoder.config.max_position_embeddings, ... pad_token_id=processor.tokenizer.pad_token_id, ... eos_token_id=processor.tokenizer.eos_token_id, ... use_cache=True, ... bad_words_ids=[[processor.tokenizer.unk_token_id]], ... return_dict_in_generate=True, ... ) >>> sequence = processor.batch_decode(outputs.sequences)[0] >>> sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "") >>> sequence = re.sub(r"<.*?>", "", sequence, count=1).strip() >>> print(processor.token2json(sequence)) {'class': 'advertisement'} Step-by-step Document Parsing >>> import re >>> from transformers import DonutProcessor, VisionEncoderDecoderModel >>> from datasets import load_dataset >>> import torch >>> processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2") >>> model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2") >>> device = "cuda" if torch.cuda.is_available() else "cpu" >>> model.to(device) >>> >>> dataset = load_dataset("hf-internal-testing/example-documents", split="test") >>> image = dataset[2]["image"] >>> >>> task_prompt = "<s_cord-v2>" >>> decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids >>> pixel_values = processor(image, return_tensors="pt").pixel_values >>> outputs = model.generate( ... pixel_values.to(device), ... decoder_input_ids=decoder_input_ids.to(device), ... max_length=model.decoder.config.max_position_embeddings, ... pad_token_id=processor.tokenizer.pad_token_id, ... eos_token_id=processor.tokenizer.eos_token_id, ... use_cache=True, ... bad_words_ids=[[processor.tokenizer.unk_token_id]], ... return_dict_in_generate=True, ... ) >>> sequence = processor.batch_decode(outputs.sequences)[0] >>> sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "") >>> sequence = re.sub(r"<.*?>", "", sequence, count=1).strip() >>> print(processor.token2json(sequence)) {'menu': {'nm': 'CINNAMON SUGAR', 'unitprice': '17,000', 'cnt': '1 x', 'price': '17,000'}, 'sub_total': {'subtotal_price': '17,000'}, 'total': {'total_price': '17,000', 'cashprice': '20,000', 'changeprice': '3,000'}} Step-by-step Document Visual Question Answering (DocVQA) >>> import re >>> from transformers import DonutProcessor, VisionEncoderDecoderModel >>> from datasets import load_dataset >>> import torch >>> processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa") >>> model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa") >>> device = "cuda" if torch.cuda.is_available() else "cpu" >>> model.to(device) >>> >>> dataset = load_dataset("hf-internal-testing/example-documents", split="test") >>> image = dataset[0]["image"] >>> >>> task_prompt = "<s_docvqa><s_question>{user_input}</s_question><s_answer>" >>> question = "When is the coffee break?" >>> prompt = task_prompt.replace("{user_input}", question) >>> decoder_input_ids = processor.tokenizer(prompt, add_special_tokens=False, return_tensors="pt").input_ids >>> pixel_values = processor(image, return_tensors="pt").pixel_values >>> outputs = model.generate( ... pixel_values.to(device), ... decoder_input_ids=decoder_input_ids.to(device), ... max_length=model.decoder.config.max_position_embeddings, ... pad_token_id=processor.tokenizer.pad_token_id, ... eos_token_id=processor.tokenizer.eos_token_id, ... 
use_cache=True, ... bad_words_ids=[[processor.tokenizer.unk_token_id]], ... return_dict_in_generate=True, ... ) >>> sequence = processor.batch_decode(outputs.sequences)[0] >>> sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "") >>> sequence = re.sub(r"<.*?>", "", sequence, count=1).strip() >>> print(processor.token2json(sequence)) {'question': 'When is the coffee break?', 'answer': '11-14 to 11:39 a.m.'} See the model hub to look for Donut checkpoints. Training We refer to the tutorial notebooks. DonutSwinConfig class transformers.DonutSwinConfig < source > ( image_size = 224 patch_size = 4 num_channels = 3 embed_dim = 96 depths = [2, 2, 6, 2] num_heads = [3, 6, 12, 24] window_size = 7 mlp_ratio = 4.0 qkv_bias = True hidden_dropout_prob = 0.0 attention_probs_dropout_prob = 0.0 drop_path_rate = 0.1 hidden_act = 'gelu' use_absolute_embeddings = False initializer_range = 0.02 layer_norm_eps = 1e-05 **kwargs ) Parameters image_size (int, optional, defaults to 224) — The size (resolution) of each image. patch_size (int, optional, defaults to 4) — The size (resolution) of each patch. num_channels (int, optional, defaults to 3) — The number of input channels. embed_dim (int, optional, defaults to 96) — Dimensionality of patch embedding. depths (list(int), optional, defaults to [2, 2, 6, 2]) — Depth of each layer in the Transformer encoder. num_heads (list(int), optional, defaults to [3, 6, 12, 24]) — Number of attention heads in each layer of the Transformer encoder. window_size (int, optional, defaults to 7) — Size of windows. mlp_ratio (float, optional, defaults to 4.0) — Ratio of MLP hidden dimensionality to embedding dimensionality. qkv_bias (bool, optional, defaults to True) — Whether or not a learnable bias should be added to the queries, keys and values. hidden_dropout_prob (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings and encoder. attention_probs_dropout_prob (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities. drop_path_rate (float, optional, defaults to 0.1) — Stochastic depth rate. hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder. If string, "gelu", "relu", "selu" and "gelu_new" are supported. use_absolute_embeddings (bool, optional, defaults to False) — Whether or not to add absolute position embeddings to the patch embeddings. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers. This is the configuration class to store the configuration of a DonutSwinModel. It is used to instantiate a Donut model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Donut naver-clova-ix/donut-base architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. 
Example: >>> from transformers import DonutSwinConfig, DonutSwinModel >>> >>> configuration = DonutSwinConfig() >>> >>> model = DonutSwinModel(configuration) >>> >>> configuration = model.config DonutImageProcessor class transformers.DonutImageProcessor < source > ( do_resize: bool = True size: typing.Dict[str, int] = None resample: Resampling = <Resampling.BILINEAR: 2> do_thumbnail: bool = True do_align_long_axis: bool = False do_pad: bool = True do_rescale: bool = True rescale_factor: typing.Union[int, float] = 0.00392156862745098 do_normalize: bool = True image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None **kwargs ) Parameters do_resize (bool, optional, defaults to True) — Whether to resize the image’s (height, width) dimensions to the specified size. Can be overridden by do_resize in the preprocess method. size (Dict[str, int] optional, defaults to {"shortest_edge" -- 224}): Size of the image after resizing. The shortest edge of the image is resized to size[“shortest_edge”], with the longest edge resized to keep the input aspect ratio. Can be overridden by size in the preprocess method. resample (PILImageResampling, optional, defaults to PILImageResampling.BILINEAR) — Resampling filter to use if resizing the image. Can be overridden by resample in the preprocess method. do_thumbnail (bool, optional, defaults to True) — Whether to resize the image using thumbnail method. do_align_long_axis (bool, optional, defaults to False) — Whether to align the long axis of the image with the long axis of size by rotating by 90 degrees. do_pad (bool, optional, defaults to True) — Whether to pad the image. If random_padding is set to True in preprocess, each image is padded with a random amont of padding on each size, up to the largest image size in the batch. Otherwise, all images are padded to the largest image size in the batch. do_rescale (bool, optional, defaults to True) — Whether to rescale the image by the specified scale rescale_factor. Can be overridden by do_rescale in the preprocess method. rescale_factor (int or float, optional, defaults to 1/255) — Scale factor to use if rescaling the image. Can be overridden by rescale_factor in the preprocess method. do_normalize — Whether to normalize the image. Can be overridden by do_normalize in the preprocess method. image_mean (float or List[float], optional, defaults to IMAGENET_STANDARD_MEAN) — Mean to use if normalizing the image. This is a float or list of floats the length of the number of channels in the image. Can be overridden by the image_mean parameter in the preprocess method. image_std (float or List[float], optional, defaults to IMAGENET_STANDARD_STD) — Image standard deviation. Constructs a Donut image processor. 
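For completeness, a short hedged sketch of calling the image processor on its own (the checkpoint is the one used in the inference examples above; the local file name "document.png" is a placeholder):

>>> from transformers import DonutImageProcessor
>>> from PIL import Image

>>> image_processor = DonutImageProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-rvlcdip")
>>> image = Image.open("document.png").convert("RGB")  # placeholder path
>>> inputs = image_processor(image, return_tensors="pt")
>>> pixel_values = inputs.pixel_values  # (1, 3, height, width) after resize/thumbnail/pad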
preprocess < source > ( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), typing.List[ForwardRef('PIL.Image.Image')], typing.List[numpy.ndarray], typing.List[ForwardRef('torch.Tensor')]] do_resize: bool = None size: typing.Dict[str, int] = None resample: Resampling = None do_thumbnail: bool = None do_align_long_axis: bool = None do_pad: bool = None random_padding: bool = False do_rescale: bool = None rescale_factor: float = None do_normalize: bool = None image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None data_format: typing.Optional[transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'> input_data_format: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None **kwargs ) Parameters images (ImageInput) — Image to preprocess. Expects a single or batch of images with pixel values ranging from 0 to 255. If passing in images with pixel values between 0 and 1, set do_rescale=False. do_resize (bool, optional, defaults to self.do_resize) — Whether to resize the image. size (Dict[str, int], optional, defaults to self.size) — Size of the image after resizing. Shortest edge of the image is resized to min(size[“height”], size[“width”]) with the longest edge resized to keep the input aspect ratio. resample (int, optional, defaults to self.resample) — Resampling filter to use if resizing the image. This can be one of the enum PILImageResampling. Only has an effect if do_resize is set to True. do_thumbnail (bool, optional, defaults to self.do_thumbnail) — Whether to resize the image using thumbnail method. do_align_long_axis (bool, optional, defaults to self.do_align_long_axis) — Whether to align the long axis of the image with the long axis of size by rotating by 90 degrees. do_pad (bool, optional, defaults to self.do_pad) — Whether to pad the image. If random_padding is set to True, each image is padded with a random amont of padding on each size, up to the largest image size in the batch. Otherwise, all images are padded to the largest image size in the batch. random_padding (bool, optional, defaults to self.random_padding) — Whether to use random padding when padding the image. If True, each image in the batch with be padded with a random amount of padding on each side up to the size of the largest image in the batch. do_rescale (bool, optional, defaults to self.do_rescale) — Whether to rescale the image pixel values. rescale_factor (float, optional, defaults to self.rescale_factor) — Rescale factor to rescale the image by if do_rescale is set to True. do_normalize (bool, optional, defaults to self.do_normalize) — Whether to normalize the image. image_mean (float or List[float], optional, defaults to self.image_mean) — Image mean to use for normalization. image_std (float or List[float], optional, defaults to self.image_std) — Image standard deviation to use for normalization. return_tensors (str or TensorType, optional) — The type of tensors to return. Can be one of: Unset: Return a list of np.ndarray. TensorType.TENSORFLOW or 'tf': Return a batch of type tf.Tensor. TensorType.PYTORCH or 'pt': Return a batch of type torch.Tensor. TensorType.NUMPY or 'np': Return a batch of type np.ndarray. TensorType.JAX or 'jax': Return a batch of type jax.numpy.ndarray. 
data_format (ChannelDimension or str, optional, defaults to ChannelDimension.FIRST) — The channel dimension format for the output image. Can be one of: ChannelDimension.FIRST: image in (num_channels, height, width) format. ChannelDimension.LAST: image in (height, width, num_channels) format. Unset: defaults to the channel dimension format of the input image. input_data_format (ChannelDimension or str, optional) — The channel dimension format for the input image. If unset, the channel dimension format is inferred from the input image. Can be one of: "channels_first" or ChannelDimension.FIRST: image in (num_channels, height, width) format. "channels_last" or ChannelDimension.LAST: image in (height, width, num_channels) format. "none" or ChannelDimension.NONE: image in (height, width) format. Preprocess an image or batch of images. DonutFeatureExtractor Preprocess an image or a batch of images. DonutProcessor class transformers.DonutProcessor < source > ( image_processor = None tokenizer = None **kwargs ) Parameters image_processor (DonutImageProcessor) — An instance of DonutImageProcessor. The image processor is a required input. tokenizer ([XLMRobertaTokenizer/XLMRobertaTokenizerFast]) — An instance of [XLMRobertaTokenizer/XLMRobertaTokenizerFast]. The tokenizer is a required input. Constructs a Donut processor which wraps a Donut image processor and an XLMRoBERTa tokenizer into a single processor. DonutProcessor offers all the functionalities of DonutImageProcessor and [XLMRobertaTokenizer/XLMRobertaTokenizerFast]. See the call() and decode() for more information. When used in normal mode, this method forwards all its arguments to AutoImageProcessor’s __call__() and returns its output. If used in the context as_target_processor() this method forwards all its arguments to DonutTokenizer’s ~DonutTokenizer.__call__. Please refer to the doctsring of the above two methods for more information. from_pretrained < source > ( pretrained_model_name_or_path: typing.Union[str, os.PathLike] cache_dir: typing.Union[str, os.PathLike, NoneType] = None force_download: bool = False local_files_only: bool = False token: typing.Union[bool, str, NoneType] = None revision: str = 'main' **kwargs ) Parameters pretrained_model_name_or_path (str or os.PathLike) — This can be either: a string, the model id of a pretrained feature_extractor hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased. a path to a directory containing a feature extractor file saved using the save_pretrained() method, e.g., ./my_model_directory/. a path or url to a saved feature extractor JSON file, e.g., ./my_model_directory/preprocessor_config.json. **kwargs — Additional keyword arguments passed along to both from_pretrained() and ~tokenization_utils_base.PreTrainedTokenizer.from_pretrained. Instantiate a processor associated with a pretrained model. This class method is simply calling the feature extractor from_pretrained(), image processor ImageProcessingMixin and the tokenizer ~tokenization_utils_base.PreTrainedTokenizer.from_pretrained methods. Please refer to the docstrings of the methods above for more information. save_pretrained < source > ( save_directory push_to_hub: bool = False **kwargs ) Parameters save_directory (str or os.PathLike) — Directory where the feature extractor JSON file and the tokenizer files will be saved (directory will be created if it does not exist). 
push_to_hub (bool, optional, defaults to False) — Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the repository you want to push to with repo_id (will default to the name of save_directory in your namespace). kwargs (Dict[str, Any], optional) — Additional key word arguments passed along to the push_to_hub() method. Saves the attributes of this processor (feature extractor, tokenizer…) in the specified directory so that it can be reloaded using the from_pretrained() method. This class method is simply calling save_pretrained() and save_pretrained(). Please refer to the docstrings of the methods above for more information. This method forwards all its arguments to DonutTokenizer’s batch_decode(). Please refer to the docstring of this method for more information. This method forwards all its arguments to DonutTokenizer’s decode(). Please refer to the docstring of this method for more information. DonutSwinModel class transformers.DonutSwinModel < source > ( config add_pooling_layer = True use_mask_token = False ) Parameters config (DonutSwinConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare Donut Swin Model transformer outputting raw hidden-states without any specific head on top. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( pixel_values: typing.Optional[torch.FloatTensor] = None bool_masked_pos: typing.Optional[torch.BoolTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.donut.modeling_donut_swin.DonutSwinModelOutput or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See DonutImageProcessor.call() for details. head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. bool_masked_pos (torch.BoolTensor of shape (batch_size, num_patches)) — Boolean masked positions. Indicates which patches are masked (1) and which aren’t (0). Returns transformers.models.donut.modeling_donut_swin.DonutSwinModelOutput or tuple(torch.FloatTensor) A transformers.models.donut.modeling_donut_swin.DonutSwinModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DonutSwinConfig) and inputs. 
last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (torch.FloatTensor of shape (batch_size, hidden_size), optional, returned when add_pooling_layer=True is passed) — Average pooling of the last layer hidden-state. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each stage) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. reshaped_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each stage) of shape (batch_size, hidden_size, height, width). Hidden-states of the model at the output of each layer plus the initial embedding outputs, reshaped to include the spatial dimensions. The DonutSwinModel forward method overrides the __call__ special method. Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example:

>>> from transformers import AutoImageProcessor, DonutSwinModel
>>> import torch
>>> from datasets import load_dataset

>>> dataset = load_dataset("huggingface/cats-image")
>>> image = dataset["test"]["image"][0]

>>> image_processor = AutoImageProcessor.from_pretrained("naver-clova-ix/donut-base")
>>> model = DonutSwinModel.from_pretrained("naver-clova-ix/donut-base")

>>> inputs = image_processor(image, return_tensors="pt")
>>> with torch.no_grad():
...     outputs = model(**inputs)

>>> last_hidden_states = outputs.last_hidden_state
>>> list(last_hidden_states.shape)
[1, 49, 768]
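As a sanity check on the shape in the example above, each of the three Swin patch-merging stages halves the spatial resolution per side and doubles the channel dimension. A hedged back-of-the-envelope calculation using the DonutSwinConfig defaults listed earlier (the actual donut-base checkpoint may use a different input resolution):

>>> image_size, patch_size, embed_dim, depths = 224, 4, 96, [2, 2, 6, 2]
>>> num_merges = len(depths) - 1
>>> hidden_size = embed_dim * 2**num_merges  # 96 * 8 = 768
>>> tokens_per_side = image_size // patch_size // 2**num_merges  # 224 / 4 / 8 = 7
>>> (tokens_per_side**2, hidden_size)
(49, 768)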
https://huggingface.co/docs/transformers/model_doc/gpt-sw3
GPT-Sw3 Overview The GPT-Sw3 model was first proposed in Lessons Learned from GPT-SW3: Building the First Large-Scale Generative Language Model for Swedish by Ariel Ekgren, Amaru Cuba Gyllensten, Evangelia Gogoulou, Alice Heiman, Severine Verlinden, Joey Öhman, Fredrik Carlsson, Magnus Sahlgren. Since that first paper the authors have extended their work and trained new models on their new 1.2TB corpora named The Nordic Pile. GPT-Sw3 is a collection of large decoder-only pretrained transformer language models that were developed by AI Sweden in collaboration with RISE and the WASP WARA for Media and Language. GPT-Sw3 has been trained on a dataset containing 320B tokens in Swedish, Norwegian, Danish, Icelandic, English, and programming code. The model was pretrained using a causal language modeling (CLM) objective utilizing the NeMo Megatron GPT implementation. This model was contributed by AI Sweden. The implementation uses the GPT2Model coupled with our GPTSw3Tokenizer. This means that AutoTokenizer and AutoModelForCausalLM map to our tokenizer implementation and the corresponding GPT2 model implementation respectively. Note that sentencepiece is required to use our tokenizer and can be installed with: pip install transformers[sentencepiece] or pip install sentencepiece Example usage: >>> from transformers import AutoTokenizer, AutoModelForCausalLM >>> tokenizer = AutoTokenizer.from_pretrained("AI-Sweden/gpt-sw3-356m") >>> model = AutoModelForCausalLM.from_pretrained("AI-Sweden/gpt-sw3-356m") >>> input_ids = tokenizer("Träd är fina för att", return_tensors="pt")["input_ids"] >>> generated_token_ids = model.generate(inputs=input_ids, max_new_tokens=10, do_sample=True)[0] >>> print(tokenizer.decode(generated_token_ids)) Träd är fina för att de är färgstarka. Men ibland är det fint Documentation resources Text classification task guide Token classification task guide Causal language modeling task guide GPTSw3Tokenizer class transformers.GPTSw3Tokenizer < source > ( vocab_file do_lower_case = False remove_space = False keep_accents = False pad_token = None unk_token = None eos_token = None bos_token = None sp_model_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None **kwargs ) Parameters vocab_file (str) — SentencePiece file (generally has a .spm extension) that contains the vocabulary necessary to instantiate a tokenizer. do_lower_case (bool, optional, defaults to False) — Whether or not to lowercase the input when tokenizing. remove_space (bool, optional, defaults to False) — Whether or not to strip the text when tokenizing (removing excess spaces before and after the string). keep_accents (bool, optional, defaults to False) — Whether or not to keep accents when tokenizing. bos_token (str, optional) — The beginning of sequence token that can be used for downstream task, was not seen during pretraining. If not provided, will default to ’’ or ’<|endoftext|>’, depending on model size. eos_token (str, optional) — The end of sequence token seen during pretraining. If not provided, will default to ’<|endoftext|>’ unk_token (str, optional) — The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead. If not provided, will default to ’‘. pad_token (str, optional) — The token used for padding, for example when batching sequences of different lengths. If not provided, will default to ’’ or ’’ depending on model size. sp_model_kwargs (dict, optional) — Will be passed to the SentencePieceProcessor.__init__() method. 
The Python wrapper for SentencePiece can be used, among other things, to set: enable_sampling: Enable subword regularization. nbest_size: Sampling parameters for unigram. Invalid for BPE-Dropout. nbest_size = {0,1}: No sampling is performed. nbest_size > 1: samples from the nbest_size results. nbest_size < 0: assumes that nbest_size is infinite and samples from all hypotheses (lattice) using the forward-filtering-and-backward-sampling algorithm. alpha: Smoothing parameter for unigram sampling, and dropout probability of merge operations for BPE-dropout. sp_model (SentencePieceProcessor) — The SentencePiece processor that is used for every conversion (string, tokens and IDs). whitespaces (set) — The whitespaces that are replaced in the whitespace normalization in preprocessing. non_printing_characters_re (Pattern) — The compiled regular expression to remove non-printing characters in preprocessing. Construct a GPTSw3 tokenizer. Based on SentencePiece. This tokenizer inherits from PreTrainedTokenizer, which contains most of the main methods. Users should refer to this superclass for more information regarding those methods. Example usage:

>>> from transformers import GPTSw3Tokenizer

>>> tokenizer = GPTSw3Tokenizer.from_pretrained("AI-Sweden/gpt-sw3-126m")
>>> tokenizer("Svenska är kul!")["input_ids"]
[1814, 377, 3617, 63504]

save_vocabulary < source > ( save_directory: str filename_prefix: typing.Optional[str] = None )
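Generation as shown in the overview example can also be run through the text-generation pipeline. A minimal hedged sketch (the checkpoint name follows the ones used above; the sampling settings are illustrative):

>>> from transformers import pipeline

>>> generator = pipeline("text-generation", model="AI-Sweden/gpt-sw3-126m")
>>> # Sampling makes the continuation non-deterministic; the prompt is the one used earlier on this page.
>>> generator("Träd är fina för att", max_new_tokens=10, do_sample=True)[0]["generated_text"]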
https://huggingface.co/docs/transformers/model_doc/graphormer
Graphormer Overview The Graphormer model was proposed in Do Transformers Really Perform Bad for Graph Representation? by Chengxuan Ying, Tianle Cai, Shengjie Luo, Shuxin Zheng, Guolin Ke, Di He, Yanming Shen and Tie-Yan Liu. It is a Graph Transformer model, modified to allow computations on graphs instead of text sequences by generating embeddings and features of interest during preprocessing and collation, then using a modified attention. The abstract from the paper is the following: The Transformer architecture has become a dominant choice in many domains, such as natural language processing and computer vision. Yet, it has not achieved competitive performance on popular leaderboards of graph-level prediction compared to mainstream GNN variants. Therefore, it remains a mystery how Transformers could perform well for graph representation learning. In this paper, we solve this mystery by presenting Graphormer, which is built upon the standard Transformer architecture, and could attain excellent results on a broad range of graph representation learning tasks, especially on the recent OGB Large-Scale Challenge. Our key insight to utilizing Transformer in the graph is the necessity of effectively encoding the structural information of a graph into the model. To this end, we propose several simple yet effective structural encoding methods to help Graphormer better model graph-structured data. Besides, we mathematically characterize the expressive power of Graphormer and exhibit that with our ways of encoding the structural information of graphs, many popular GNN variants could be covered as the special cases of Graphormer. Tips: This model will not work well on large graphs (more than 100 nodes/edges), as it will make the memory explode. You can reduce the batch size, increase your RAM, or decrease the UNREACHABLE_NODE_DISTANCE parameter in algos_graphormer.pyx, but it will be hard to go above 700 nodes/edges. This model does not use a tokenizer, but instead a special collator during training. This model was contributed by clefourrier. The original code can be found here. GraphormerConfig class transformers.GraphormerConfig < source > ( num_classes: int = 1 num_atoms: int = 4608 num_edges: int = 1536 num_in_degree: int = 512 num_out_degree: int = 512 num_spatial: int = 512 num_edge_dis: int = 128 multi_hop_max_dist: int = 5 spatial_pos_max: int = 1024 edge_type: str = 'multi_hop' max_nodes: int = 512 share_input_output_embed: bool = False num_hidden_layers: int = 12 embedding_dim: int = 768 ffn_embedding_dim: int = 768 num_attention_heads: int = 32 dropout: float = 0.1 attention_dropout: float = 0.1 activation_dropout: float = 0.1 layerdrop: float = 0.0 encoder_normalize_before: bool = False pre_layernorm: bool = False apply_graphormer_init: bool = False activation_fn: str = 'gelu' embed_scale: float = None freeze_embeddings: bool = False num_trans_layers_to_freeze: int = 0 traceable: bool = False q_noise: float = 0.0 qn_block_size: int = 8 kdim: int = None vdim: int = None bias: bool = True self_attention: bool = True pad_token_id = 0 bos_token_id = 1 eos_token_id = 2 **kwargs ) Parameters num_classes (int, optional, defaults to 1) — Number of target classes or labels, set to n for binary classification of n tasks. num_atoms (int, optional, defaults to 512*9) — Number of node types in the graphs. num_edges (int, optional, defaults to 512*3) — Number of edges types in the graph. num_in_degree (int, optional, defaults to 512) — Number of in degrees types in the input graphs. 
num_out_degree (int, optional, defaults to 512) — Number of out degrees types in the input graphs. num_edge_dis (int, optional, defaults to 128) — Number of edge dis in the input graphs. multi_hop_max_dist (int, optional, defaults to 20) — Maximum distance of multi hop edges between two nodes. spatial_pos_max (int, optional, defaults to 1024) — Maximum distance between nodes in the graph attention bias matrices, used during preprocessing and collation. edge_type (str, optional, defaults to multihop) — Type of edge relation chosen. max_nodes (int, optional, defaults to 512) — Maximum number of nodes which can be parsed for the input graphs. share_input_output_embed (bool, optional, defaults to False) — Shares the embedding layer between encoder and decoder - careful, True is not implemented. num_layers (int, optional, defaults to 12) — Number of layers. embedding_dim (int, optional, defaults to 768) — Dimension of the embedding layer in encoder. ffn_embedding_dim (int, optional, defaults to 768) — Dimension of the “intermediate” (often named feed-forward) layer in encoder. num_attention_heads (int, optional, defaults to 32) — Number of attention heads in the encoder. self_attention (bool, optional, defaults to True) — Model is self attentive (False not implemented). activation_function (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported. dropout (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_dropout (float, optional, defaults to 0.1) — The dropout probability for the attention weights. activation_dropout (float, optional, defaults to 0.1) — The dropout probability for the activation of the linear transformer layer. layerdrop (float, optional, defaults to 0.0) — The LayerDrop probability for the encoder. See the [LayerDrop paper](see https://arxiv.org/abs/1909.11556) for more details. bias (bool, optional, defaults to True) — Uses bias in the attention module - unsupported at the moment. embed_scale(float, optional, defaults to None) — Scaling factor for the node embeddings. num_trans_layers_to_freeze (int, optional, defaults to 0) — Number of transformer layers to freeze. encoder_normalize_before (bool, optional, defaults to False) — Normalize features before encoding the graph. pre_layernorm (bool, optional, defaults to False) — Apply layernorm before self attention and the feed forward network. Without this, post layernorm will be used. apply_graphormer_init (bool, optional, defaults to False) — Apply a custom graphormer initialisation to the model before training. freeze_embeddings (bool, optional, defaults to False) — Freeze the embedding layer, or train it along the model. encoder_normalize_before (bool, optional, defaults to False) — Apply the layer norm before each encoder block. q_noise (float, optional, defaults to 0.0) — Amount of quantization noise (see “Training with Quantization Noise for Extreme Model Compression”). (For more detail, see fairseq’s documentation on quant_noise). qn_block_size (int, optional, defaults to 8) — Size of the blocks for subsequent quantization with iPQ (see q_noise). kdim (int, optional, defaults to None) — Dimension of the key in the attention, if different from the other values. vdim (int, optional, defaults to None) — Dimension of the value in the attention, if different from the other values. 
use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). traceable (bool, optional, defaults to False) — Changes return value of the encoder’s inner_state to stacked tensors. Example — This is the configuration class to store the configuration of a ~GraphormerModel. It is used to instantiate an Graphormer model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Graphormer graphormer-base-pcqm4mv1 architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. GraphormerModel class transformers.GraphormerModel < source > ( config: GraphormerConfig ) The Graphormer model is a graph-encoder model. It goes from a graph to its representation. If you want to use the model for a downstream classification task, use GraphormerForGraphClassification instead. For any other downstream task, feel free to add a new class, or combine this model with a downstream model of your choice, following the example in GraphormerForGraphClassification. forward < source > ( input_nodes: LongTensor input_edges: LongTensor attn_bias: Tensor in_degree: LongTensor out_degree: LongTensor spatial_pos: LongTensor attn_edge_type: LongTensor perturb: typing.Optional[torch.FloatTensor] = None masked_tokens: None = None return_dict: typing.Optional[bool] = None **unused ) GraphormerForGraphClassification class transformers.GraphormerForGraphClassification < source > ( config: GraphormerConfig ) This model can be used for graph-level classification or regression tasks. It can be trained on regression (by setting config.num_classes to 1); there should be one float-type label per graph one task classification (by setting config.num_classes to the number of classes); there should be one integer label per graph binary multi-task classification (by setting config.num_classes to the number of labels); there should be a list of integer labels for each graph. forward < source > ( input_nodes: LongTensor input_edges: LongTensor attn_bias: Tensor in_degree: LongTensor out_degree: LongTensor spatial_pos: LongTensor attn_edge_type: LongTensor labels: typing.Optional[torch.LongTensor] = None return_dict: typing.Optional[bool] = None **unused )
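Because Graphormer consumes preprocessed graph features rather than token ids, a typical workflow preprocesses a graph dataset and batches it with the dedicated collator. The sketch below is hedged: the dataset name, the checkpoint name and the collator import path are assumptions based on community examples, not guarantees of this page.

>>> from datasets import load_dataset
>>> from transformers import GraphormerForGraphClassification, Trainer, TrainingArguments
>>> from transformers.models.graphormer.collating_graphormer import preprocess_item, GraphormerDataCollator

>>> # Assumed dataset: small molecular graphs with binary labels.
>>> dataset = load_dataset("OGB/ogbg-molhiv")
>>> dataset = dataset.map(preprocess_item, batched=False)  # adds in_degree, spatial_pos, attn_bias, ...

>>> # Assumed checkpoint; the classification head is re-initialized for the new task.
>>> model = GraphormerForGraphClassification.from_pretrained(
...     "clefourrier/graphormer-base-pcqm4mv1",
...     num_classes=2,
...     ignore_mismatched_sizes=True,
... )
>>> trainer = Trainer(
...     model=model,
...     args=TrainingArguments(output_dir="graphormer-molhiv", per_device_train_batch_size=8),
...     train_dataset=dataset["train"],
...     data_collator=GraphormerDataCollator(),
... )
>>> # trainer.train()  # launch fine-tuning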
https://huggingface.co/docs/transformers/model_doc/gpt_bigcode
GPTBigCode Overview The GPTBigCode model was proposed in SantaCoder: don’t reach for the stars! by BigCode. The listed authors are: Loubna Ben Allal, Raymond Li, Denis Kocetkov, Chenghao Mou, Christopher Akiki, Carlos Munoz Ferrandis, Niklas Muennighoff, Mayank Mishra, Alex Gu, Manan Dey, Logesh Kumar Umapathi, Carolyn Jane Anderson, Yangtian Zi, Joel Lamy Poirier, Hailey Schoelkopf, Sergey Troshin, Dmitry Abulkhanov, Manuel Romero, Michael Lappert, Francesco De Toni, Bernardo García del Río, Qian Liu, Shamik Bose, Urvashi Bhattacharyya, Terry Yue Zhuo, Ian Yu, Paulo Villegas, Marco Zocca, Sourab Mangrulkar, David Lansky, Huu Nguyen, Danish Contractor, Luis Villa, Jia Li, Dzmitry Bahdanau, Yacine Jernite, Sean Hughes, Daniel Fried, Arjun Guha, Harm de Vries, Leandro von Werra. The abstract from the paper is the following: The BigCode project is an open-scientific collaboration working on the responsible development of large language models for code. This tech report describes the progress of the collaboration until December 2022, outlining the current state of the Personally Identifiable Information (PII) redaction pipeline, the experiments conducted to de-risk the model architecture, and the experiments investigating better preprocessing methods for the training data. We train 1.1B parameter models on the Java, JavaScript, and Python subsets of The Stack and evaluate them on the MultiPL-E text-to-code benchmark. We find that more aggressive filtering of near-duplicates can further boost performance and, surprisingly, that selecting files from repositories with 5+ GitHub stars deteriorates performance significantly. Our best model outperforms previous open-source multilingual code generation models (InCoder-6.7B and CodeGen-Multi-2.7B) in both left-to-right generation and infilling on the Java, JavaScript, and Python portions of MultiPL-E, despite being a substantially smaller model. All models are released under an OpenRAIL license at this https URL. The model is an optimized GPT2 model with support for Multi-Query Attention. Technical details The main differences compared to GPT2 are listed below; a short illustrative sketch follows the list.

Added support for Multi-Query Attention.
Use gelu_pytorch_tanh instead of classic gelu.
Avoid unnecessary synchronizations (this has since been added to GPT2 in #20061, but wasn’t in the reference codebase).
Use Linear layers instead of Conv1D (good speedup but makes the checkpoints incompatible).
Merge _attn and _upcast_and_reordered_attn. Always merge the matmul with scaling. Rename reorder_and_upcast_attn -> attention_softmax_in_fp32.
Cache the attention mask value to avoid recreating it every time.
Use jit to fuse the attention fp32 casting, masking, softmax, and scaling.
Combine the attention and causal masks into a single one, pre-computed for the whole model instead of every layer.
Merge the key and value caches into one (this changes the format of layer_past/present, does it risk creating problems?).
Use the memory layout (self.num_heads, 3, self.head_dim) instead of (3, self.num_heads, self.head_dim) for the QKV tensor with MHA (prevents an overhead with the merged key and values, but makes the checkpoints incompatible with the original gpt2 model).
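To make the first difference concrete: in Multi-Query Attention all query heads share a single key/value head, which shrinks the key/value cache by a factor of num_heads while the attention scores keep the same shape. A small self-contained illustration (a conceptual sketch, not the actual modeling code):

>>> import torch

>>> batch, num_heads, seq_len, head_dim = 2, 12, 16, 64

>>> # Multi-Head Attention: one key head per query head.
>>> q = torch.randn(batch, num_heads, seq_len, head_dim)
>>> k_mha = torch.randn(batch, num_heads, seq_len, head_dim)
>>> (q @ k_mha.transpose(-1, -2)).shape
torch.Size([2, 12, 16, 16])

>>> # Multi-Query Attention: a single key head is broadcast across all query heads,
>>> # so the cached keys/values are num_heads times smaller.
>>> k_mqa = torch.randn(batch, 1, seq_len, head_dim)
>>> (q @ k_mqa.transpose(-1, -2)).shape
torch.Size([2, 12, 16, 16])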
You can read more about the optimizations in the original pull request. GPTBigCodeConfig class transformers.GPTBigCodeConfig < source > ( vocab_size = 50257 n_positions = 1024 n_embd = 768 n_layer = 12 n_head = 12 n_inner = None activation_function = 'gelu_pytorch_tanh' resid_pdrop = 0.1 embd_pdrop = 0.1 attn_pdrop = 0.1 layer_norm_epsilon = 1e-05 initializer_range = 0.02 scale_attn_weights = True use_cache = True bos_token_id = 50256 eos_token_id = 50256 attention_softmax_in_fp32 = True scale_attention_softmax_in_fp32 = True multi_query = True **kwargs ) Parameters vocab_size (int, optional, defaults to 50257) — Vocabulary size of the GPT-2 model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling GPTBigCodeModel. n_positions (int, optional, defaults to 1024) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). n_embd (int, optional, defaults to 768) — Dimensionality of the embeddings and hidden states. n_layer (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. n_head (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. n_inner (int, optional, defaults to None) — Dimensionality of the inner feed-forward layers. None will set it to 4 times n_embd. activation_function (str, optional, defaults to "gelu_pytorch_tanh") — Activation function, to be selected in the list ["relu", "silu", "gelu", "tanh", "gelu_new", "gelu_pytorch_tanh"]. resid_pdrop (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. embd_pdrop (float, optional, defaults to 0.1) — The dropout ratio for the embeddings. attn_pdrop (float, optional, defaults to 0.1) — The dropout ratio for the attention. layer_norm_epsilon (float, optional, defaults to 1e-5) — The epsilon to use in the layer normalization layers. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. scale_attn_weights (bool, optional, defaults to True) — Scale attention weights by dividing by sqrt(hidden_size). use_cache (bool, optional, defaults to True) — Whether or not the model should return the last key/values attentions (not used by all models). attention_softmax_in_fp32 (bool, optional, defaults to True) — Whether to call the fused softmax in float32. scale_attention_softmax_in_fp32 (bool, optional, defaults to True) — Whether to scale the attention softmax in float32. multi_query (bool, optional, defaults to True) — Whether to use Multi-Query Attention (True) or Multi-Head Attention (False). This is the configuration class to store the configuration of a GPTBigCodeModel. It is used to instantiate a GPTBigCode model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the GPTBigCode gpt_bigcode architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
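A short hedged sketch of the effect of the multi_query flag on a randomly initialized model (the small dimensions are arbitrary and chosen only to keep the models tiny):

>>> from transformers import GPTBigCodeConfig, GPTBigCodeModel

>>> mqa = GPTBigCodeModel(GPTBigCodeConfig(n_layer=2, n_head=8, n_embd=256, multi_query=True))
>>> mha = GPTBigCodeModel(GPTBigCodeConfig(n_layer=2, n_head=8, n_embd=256, multi_query=False))
>>> # The multi-query variant has fewer parameters because keys/values use a single shared head.
>>> mqa.num_parameters() < mha.num_parameters()
True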
Example: >>> from transformers import GPTBigCodeConfig, GPTBigCodeModel >>> >>> configuration = GPTBigCodeConfig() >>> >>> model = GPTBigCodeModel(configuration) >>> >>> configuration = model.config GPTBigCodeModel class transformers.GPTBigCodeModel < source > ( config ) Parameters config (GPTBigCodeConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare GPT_BIGCODE Model transformer outputting raw hidden-states without any specific head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.List[torch.Tensor]] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None encoder_hidden_states: typing.Optional[torch.Tensor] = None encoder_attention_mask: typing.Optional[torch.Tensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or tuple(torch.FloatTensor) Parameters input_ids (torch.Tensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past_key_values is None else past_key_values[0][0].shape[-2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. If past_key_values is used, only input_ids that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? past_key_values (Tuple[torch.Tensor] of length config.n_layers) — Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have their past given to this model should not be passed as input_ids as they have already been computed. attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. If past_key_values is used, attention_mask needs to contain the masking strategy that was used for past_key_values. In other words, the attention_mask always has to have the length: len(past_key_values) + len(input_ids) What are attention masks? token_type_ids (torch.Tensor of shape (batch_size, input_ids_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? 
position_ids (torch.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If past_key_values is used, optionally only the last inputs_embeds have to be input (see past_key_values). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutputWithPastAndCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPTBigCodeConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. If past_key_values is used only the last hidden-state of the sequences of shape (batch_size, 1, hidden_size) is output. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. 
cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. The GPTBigCodeModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, GPTBigCodeModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("bigcode/gpt_bigcode-santacoder") >>> model = GPTBigCodeModel.from_pretrained("bigcode/gpt_bigcode-santacoder") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state GPTBigCodeForCausalLM class transformers.GPTBigCodeForCausalLM < source > ( config ) Parameters config (GPTBigCodeConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The GPT_BIGCODE Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings). This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None encoder_hidden_states: typing.Optional[torch.Tensor] = None encoder_attention_mask: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or tuple(torch.FloatTensor) Parameters input_ids (torch.Tensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past_key_values is None else past_key_values[0][0].shape[-2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. If past_key_values is used, only input_ids that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? 
past_key_values (Tuple[torch.Tensor] of length config.n_layers) — Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have their past given to this model should not be passed as input_ids as they have already been computed. attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. If past_key_values is used, attention_mask needs to contain the masking strategy that was used for past_key_values. In other words, the attention_mask always has to have the length: len(past_key_values) + len(input_ids) What are attention masks? token_type_ids (torch.Tensor of shape (batch_size, input_ids_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If past_key_values is used, optionally only the last inputs_embeds have to be input (see past_key_values). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.Tensor of shape (batch_size, sequence_length), optional) — Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set labels = input_ids Indices are selected in [-100, 0, ..., config.vocab_size] All labels set to -100 are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size] A transformers.modeling_outputs.CausalLMOutputWithCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GPTBigCodeConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction). 
logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Cross attentions weights after the attention softmax, used to compute the weighted average in the cross-attention heads. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of torch.FloatTensor tuples of length config.n_layers, with each tuple containing the cached key, value states of the self-attention and the cross-attention layers if model is used in encoder-decoder setting. Only relevant if config.is_decoder = True. Contains pre-computed hidden-states (key and values in the attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. The GPTBigCodeForCausalLM forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> import torch >>> from transformers import AutoTokenizer, GPTBigCodeForCausalLM >>> tokenizer = AutoTokenizer.from_pretrained("bigcode/gpt_bigcode-santacoder") >>> model = GPTBigCodeForCausalLM.from_pretrained("bigcode/gpt_bigcode-santacoder") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs, labels=inputs["input_ids"]) >>> loss = outputs.loss >>> logits = outputs.logits GPTBigCodeForSequenceClassification class transformers.GPTBigCodeForSequenceClassification < source > ( config ) Parameters config (GPTBigCodeConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The GPTBigCode Model transformer with a sequence classification head on top (linear layer). GPTBigCodeForSequenceClassification uses the last token in order to do the classification, as other causal models (e.g. GPT-1) do. Since it does classification on the last token, it requires to know the position of the last token. If a pad_token_id is defined in the configuration, it finds the last token that is not a padding token in each row. 
If no pad_token_id is defined, it simply takes the last value in each row of the batch. Since it cannot guess the padding tokens when inputs_embeds are passed instead of input_ids, it does the same (take the last value in each row of the batch). This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) Parameters input_ids (torch.Tensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past_key_values is None else past_key_values[0][0].shape[-2] (sequence_length of input past key value states). Indices of input sequence tokens in the vocabulary. If past_key_values is used, only input_ids that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? past_key_values (Tuple[torch.Tensor] of length config.n_layers) — Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have their past given to this model should not be passed as input_ids as they have already been computed. attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. If past_key_values is used, attention_mask needs to contain the masking strategy that was used for past_key_values. In other words, the attention_mask always has to have the length: len(past_key_values) + len(input_ids) What are attention masks? token_type_ids (torch.Tensor of shape (batch_size, input_ids_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. 
inputs_embeds (torch.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If past_key_values is used, optionally only the last inputs_embeds have to be input (see past_key_values). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.Tensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). The GPTBigCodeForSequenceClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. GPTBigCodeForTokenClassification class transformers.GPTBigCodeForTokenClassification < source > ( config ) Parameters config (GPTBigCodeConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. GPT_BIGCODE Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.Tensor]]] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None head_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None labels: typing.Optional[torch.Tensor] = None use_cache: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) Parameters input_ids (torch.Tensor of shape (batch_size, input_ids_length)) — input_ids_length = sequence_length if past_key_values is None else past_key_values[0][0].shape[-2] (sequence_length of input past key value states). 
Indices of input sequence tokens in the vocabulary. If past_key_values is used, only input_ids that do not have their past calculated should be passed as input_ids. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? past_key_values (Tuple[torch.Tensor] of length config.n_layers) — Contains precomputed hidden-states (key and values in the attention blocks) as computed by the model (see past_key_values output below). Can be used to speed up sequential decoding. The input_ids which have their past given to this model should not be passed as input_ids as they have already been computed. attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. If past_key_values is used, attention_mask needs to contain the masking strategy that was used for past_key_values. In other words, the attention_mask always has to have the length: len(past_key_values) + len(input_ids) What are attention masks? token_type_ids (torch.Tensor of shape (batch_size, input_ids_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.Tensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. If past_key_values is used, optionally only the last inputs_embeds have to be input (see past_key_values). use_cache (bool, optional) — If set to True, past_key_values key value states are returned and can be used to speed up decoding (see past_key_values). output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.Tensor of shape (batch_size, sequence_length), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). The GPTBigCodeForTokenClassification forward method, overrides the __call__ special method. 
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.
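Neither of the classification heads above is shown with a usage example on this page. The snippet below is a minimal sketch of the sequence classification variant, assuming a hypothetical two-label task: num_labels=2 is an illustrative choice, the classification head it creates is randomly initialized until the model is fine-tuned, and reusing the EOS token as pad_token_id is only one common convention for GPT-style tokenizers that define no padding token.

>>> import torch
>>> from transformers import AutoTokenizer, GPTBigCodeForSequenceClassification

>>> tokenizer = AutoTokenizer.from_pretrained("bigcode/gpt_bigcode-santacoder")
>>> # num_labels=2 is illustrative; this head is newly initialized until fine-tuned
>>> model = GPTBigCodeForSequenceClassification.from_pretrained(
...     "bigcode/gpt_bigcode-santacoder", num_labels=2
... )
>>> # the tokenizer defines no pad token, so reuse EOS so the model can locate the last non-padding token
>>> model.config.pad_token_id = tokenizer.eos_token_id

>>> inputs = tokenizer("def add(a, b): return a + b", return_tensors="pt")
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_class_id = int(logits.argmax(dim=-1))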
https://huggingface.co/docs/transformers/model_doc/dpr
DPR Overview Dense Passage Retrieval (DPR) is a set of tools and models for state-of-the-art open-domain Q&A research. It was introduced in Dense Passage Retrieval for Open-Domain Question Answering by Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih. The abstract from the paper is the following: Open-domain question answering relies on efficient passage retrieval to select candidate contexts, where traditional sparse vector space models, such as TF-IDF or BM25, are the de facto method. In this work, we show that retrieval can be practically implemented using dense representations alone, where embeddings are learned from a small number of questions and passages by a simple dual-encoder framework. When evaluated on a wide range of open-domain QA datasets, our dense retriever outperforms a strong Lucene-BM25 system largely by 9%-19% absolute in terms of top-20 passage retrieval accuracy, and helps our end-to-end QA system establish new state-of-the-art on multiple open-domain QA benchmarks. This model was contributed by lhoestq. The original code can be found here. Tips: DPR consists of three models: Question encoder: encode questions as vectors Context encoder: encode contexts as vectors Reader: extract the answer to the questions from the retrieved contexts, along with a relevance score (high if the inferred span actually answers the question). DPRConfig class transformers.DPRConfig < source > ( vocab_size = 30522 hidden_size = 768 num_hidden_layers = 12 num_attention_heads = 12 intermediate_size = 3072 hidden_act = 'gelu' hidden_dropout_prob = 0.1 attention_probs_dropout_prob = 0.1 max_position_embeddings = 512 type_vocab_size = 2 initializer_range = 0.02 layer_norm_eps = 1e-12 pad_token_id = 0 position_embedding_type = 'absolute' projection_dim: int = 0 **kwargs ) Parameters vocab_size (int, optional, defaults to 30522) — Vocabulary size of the DPR model. Defines the different tokens that can be represented by the inputs_ids passed to the forward method of BertModel. hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder. hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported. hidden_dropout_prob (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (float, optional, defaults to 0.1) — The dropout ratio for the attention probabilities. max_position_embeddings (int, optional, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). type_vocab_size (int, optional, defaults to 2) — The vocabulary size of the token_type_ids passed into BertModel. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. 
layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers. position_embedding_type (str, optional, defaults to "absolute") — Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For positional embeddings use "absolute". For more information on "relative_key", please refer to Self-Attention with Relative Position Representations (Shaw et al.). For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models with Better Relative Position Embeddings (Huang et al.). projection_dim (int, optional, defaults to 0) — Dimension of the projection for the context and question encoders. If it is set to zero (default), then no projection is done. DPRConfig is the configuration class to store the configuration of a DPRContextEncoder, DPRQuestionEncoder, or a DPRReader. It is used to instantiate the components of the DPR model according to the specified arguments, defining the model component architectures. Instantiating a configuration with the defaults will yield a similar configuration to that of the DPRContextEncoder facebook/dpr-ctx_encoder-single-nq-base architecture. This class is a subclass of BertConfig. Please check the superclass for the documentation of all kwargs. Example: >>> from transformers import DPRConfig, DPRContextEncoder >>> # Initializing a DPR configuration >>> configuration = DPRConfig() >>> # Initializing a model (with random weights) from the configuration >>> model = DPRContextEncoder(configuration) >>> # Accessing the model configuration >>> configuration = model.config DPRContextEncoderTokenizer class transformers.DPRContextEncoderTokenizer < source > ( vocab_file do_lower_case = True do_basic_tokenize = True never_split = None unk_token = '[UNK]' sep_token = '[SEP]' pad_token = '[PAD]' cls_token = '[CLS]' mask_token = '[MASK]' tokenize_chinese_chars = True strip_accents = None **kwargs ) Construct a DPRContextEncoder tokenizer. DPRContextEncoderTokenizer is identical to BertTokenizer and runs end-to-end tokenization: punctuation splitting and wordpiece. Refer to superclass BertTokenizer for usage examples and documentation concerning parameters. DPRContextEncoderTokenizerFast class transformers.DPRContextEncoderTokenizerFast < source > ( vocab_file = None tokenizer_file = None do_lower_case = True unk_token = '[UNK]' sep_token = '[SEP]' pad_token = '[PAD]' cls_token = '[CLS]' mask_token = '[MASK]' tokenize_chinese_chars = True strip_accents = None **kwargs ) Construct a “fast” DPRContextEncoder tokenizer (backed by HuggingFace’s tokenizers library). DPRContextEncoderTokenizerFast is identical to BertTokenizerFast and runs end-to-end tokenization: punctuation splitting and wordpiece. Refer to superclass BertTokenizerFast for usage examples and documentation concerning parameters. DPRQuestionEncoderTokenizer class transformers.DPRQuestionEncoderTokenizer < source > ( vocab_file do_lower_case = True do_basic_tokenize = True never_split = None unk_token = '[UNK]' sep_token = '[SEP]' pad_token = '[PAD]' cls_token = '[CLS]' mask_token = '[MASK]' tokenize_chinese_chars = True strip_accents = None **kwargs ) Constructs a DPRQuestionEncoder tokenizer. DPRQuestionEncoderTokenizer is identical to BertTokenizer and runs end-to-end tokenization: punctuation splitting and wordpiece. Refer to superclass BertTokenizer for usage examples and documentation concerning parameters. 
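The DPR tokenizer classes above defer to BertTokenizer for usage details, so no example is shown for them here. As a rough sketch of how these tokenizers are typically combined with the question and context encoders (documented below) for retrieval, candidate passages can be scored by the dot product between the pooled question embedding and the pooled context embeddings; the question and passage strings below are purely illustrative.

>>> import torch
>>> from transformers import (
...     DPRContextEncoder,
...     DPRContextEncoderTokenizer,
...     DPRQuestionEncoder,
...     DPRQuestionEncoderTokenizer,
... )

>>> q_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
>>> q_encoder = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
>>> ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
>>> ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")

>>> question = "Who recorded 'What Is Love'?"
>>> passages = [
...     "'What Is Love' is a song recorded by the artist Haddaway",
...     "Paris is the capital and most populous city of France",
... ]

>>> with torch.no_grad():
...     q_emb = q_encoder(**q_tokenizer(question, return_tensors="pt")).pooler_output
...     ctx_emb = ctx_encoder(**ctx_tokenizer(passages, padding=True, return_tensors="pt")).pooler_output
>>> scores = q_emb @ ctx_emb.T  # one dot-product score per passage; higher means more relevant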
DPRQuestionEncoderTokenizerFast class transformers.DPRQuestionEncoderTokenizerFast < source > ( vocab_file = None tokenizer_file = None do_lower_case = True unk_token = '[UNK]' sep_token = '[SEP]' pad_token = '[PAD]' cls_token = '[CLS]' mask_token = '[MASK]' tokenize_chinese_chars = True strip_accents = None **kwargs ) Constructs a “fast” DPRQuestionEncoder tokenizer (backed by HuggingFace’s tokenizers library). DPRQuestionEncoderTokenizerFast is identical to BertTokenizerFast and runs end-to-end tokenization: punctuation splitting and wordpiece. Refer to superclass BertTokenizerFast for usage examples and documentation concerning parameters. DPRReaderTokenizer class transformers.DPRReaderTokenizer < source > ( vocab_file do_lower_case = True do_basic_tokenize = True never_split = None unk_token = '[UNK]' sep_token = '[SEP]' pad_token = '[PAD]' cls_token = '[CLS]' mask_token = '[MASK]' tokenize_chinese_chars = True strip_accents = None **kwargs ) → Dict[str, List[List[int]]] Parameters questions (str or List[str]) — The questions to be encoded. You can specify one question for many passages. In this case, the question will be duplicated like [questions] * n_passages. Otherwise you have to specify as many questions as in titles or texts. titles (str or List[str]) — The passages titles to be encoded. This can be a string or a list of strings if there are several passages. texts (str or List[str]) — The passages texts to be encoded. This can be a string or a list of strings if there are several passages. padding (bool, str or PaddingStrategy, optional, defaults to False) — Activates and controls padding. Accepts the following values: True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence if provided). 'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different lengths). truncation (bool, str or TruncationStrategy, optional, defaults to False) — Activates and controls truncation. Accepts the following values: True or 'longest_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided. 'only_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided. 'only_second': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided. False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size). max_length (int, optional) — Controls the maximum length to use by one of the truncation/padding parameters. If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. 
If the model has no specific maximum input length (like XLNet) truncation/padding to a maximum length will be deactivated. return_tensors (str or TensorType, optional) — If set, will return tensors instead of list of python integers. Acceptable values are: 'tf': Return TensorFlow tf.constant objects. 'pt': Return PyTorch torch.Tensor objects. 'np': Return Numpy np.ndarray objects. return_attention_mask (bool, optional) — Whether or not to return the attention mask. If not set, will return the attention mask according to the specific tokenizer’s default, defined by the return_outputs attribute. What are attention masks? Returns Dict[str, List[List[int]]] A dictionary with the following keys: input_ids: List of token ids to be fed to a model. attention_mask: List of indices specifying which tokens should be attended to by the model. Construct a DPRReader tokenizer. DPRReaderTokenizer is almost identical to BertTokenizer and runs end-to-end tokenization: punctuation splitting and wordpiece. The difference is that it has three input strings: question, titles and texts, which are combined to be fed to the DPRReader model. Refer to superclass BertTokenizer for usage examples and documentation concerning parameters. Return a dictionary with the token ids of the input strings and other information to give to .decode_best_spans. It converts the strings of a question and different passages (title and text) into a sequence of IDs (integers), using the tokenizer and vocabulary. The resulting input_ids is a matrix of size (n_passages, sequence_length) with the format: [CLS] <question token ids> [SEP] <titles ids> [SEP] <texts ids> DPRReaderTokenizerFast class transformers.DPRReaderTokenizerFast < source > ( vocab_file = None tokenizer_file = None do_lower_case = True unk_token = '[UNK]' sep_token = '[SEP]' pad_token = '[PAD]' cls_token = '[CLS]' mask_token = '[MASK]' tokenize_chinese_chars = True strip_accents = None **kwargs ) → Dict[str, List[List[int]]] Parameters questions (str or List[str]) — The questions to be encoded. You can specify one question for many passages. In this case, the question will be duplicated like [questions] * n_passages. Otherwise you have to specify as many questions as in titles or texts. titles (str or List[str]) — The passages titles to be encoded. This can be a string or a list of strings if there are several passages. texts (str or List[str]) — The passages texts to be encoded. This can be a string or a list of strings if there are several passages. padding (bool, str or PaddingStrategy, optional, defaults to False) — Activates and controls padding. Accepts the following values: True or 'longest': Pad to the longest sequence in the batch (or no padding if only a single sequence if provided). 'max_length': Pad to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. False or 'do_not_pad' (default): No padding (i.e., can output a batch with sequences of different lengths). truncation (bool, str or TruncationStrategy, optional, defaults to False) — Activates and controls truncation. Accepts the following values: True or 'longest_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will truncate token by token, removing a token from the longest sequence in the pair if a pair of sequences (or a batch of pairs) is provided. 
'only_first': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the first sequence of a pair if a pair of sequences (or a batch of pairs) is provided. 'only_second': Truncate to a maximum length specified with the argument max_length or to the maximum acceptable input length for the model if that argument is not provided. This will only truncate the second sequence of a pair if a pair of sequences (or a batch of pairs) is provided. False or 'do_not_truncate' (default): No truncation (i.e., can output batch with sequence lengths greater than the model maximum admissible input size). max_length (int, optional) — Controls the maximum length to use by one of the truncation/padding parameters. If left unset or set to None, this will use the predefined model maximum length if a maximum length is required by one of the truncation/padding parameters. If the model has no specific maximum input length (like XLNet) truncation/padding to a maximum length will be deactivated. return_tensors (str or TensorType, optional) — If set, will return tensors instead of list of python integers. Acceptable values are: 'tf': Return TensorFlow tf.constant objects. 'pt': Return PyTorch torch.Tensor objects. 'np': Return Numpy np.ndarray objects. return_attention_mask (bool, optional) — Whether or not to return the attention mask. If not set, will return the attention mask according to the specific tokenizer’s default, defined by the return_outputs attribute. What are attention masks? Returns Dict[str, List[List[int]]] A dictionary with the following keys: input_ids: List of token ids to be fed to a model. attention_mask: List of indices specifying which tokens should be attended to by the model. Constructs a “fast” DPRReader tokenizer (backed by HuggingFace’s tokenizers library). DPRReaderTokenizerFast is almost identical to BertTokenizerFast and runs end-to-end tokenization: punctuation splitting and wordpiece. The difference is that it has three input strings: question, titles and texts, which are combined to be fed to the DPRReader model. Refer to superclass BertTokenizerFast for usage examples and documentation concerning parameters. Return a dictionary with the token ids of the input strings and other information to give to .decode_best_spans. It converts the strings of a question and different passages (title and text) into a sequence of IDs (integers), using the tokenizer and vocabulary. The resulting input_ids is a matrix of size (n_passages, sequence_length) with the format: [CLS] <question token ids> [SEP] <titles ids> [SEP] <texts ids> DPR specific outputs class transformers.models.dpr.modeling_dpr.DPRContextEncoderOutput < source > ( pooler_output: FloatTensor hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters pooler_output (torch.FloatTensor of shape (batch_size, embeddings_size)) — The DPR encoder outputs the pooler_output that corresponds to the context representation. Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer. This output is to be used to embed contexts for nearest neighbors queries with questions embeddings. 
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Class for outputs of DPRContextEncoder. class transformers.models.dpr.modeling_dpr.DPRQuestionEncoderOutput < source > ( pooler_output: FloatTensor hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters pooler_output (torch.FloatTensor of shape (batch_size, embeddings_size)) — The DPR encoder outputs the pooler_output that corresponds to the question representation. Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer. This output is to be used to embed questions for nearest neighbors queries with context embeddings. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Class for outputs of DPRQuestionEncoder. class transformers.DPRReaderOutput < source > ( start_logits: FloatTensor end_logits: FloatTensor = None relevance_logits: FloatTensor = None hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None ) Parameters start_logits (torch.FloatTensor of shape (n_passages, sequence_length)) — Logits of the start index of the span for each passage. end_logits (torch.FloatTensor of shape (n_passages, sequence_length)) — Logits of the end index of the span for each passage. relevance_logits (torch.FloatTensor of shape (n_passages, )) — Outputs of the QA classifier of the DPRReader that corresponds to the scores of each passage to answer the question, compared to all the other passages. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. 
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. Class for outputs of DPRReader. DPRContextEncoder class transformers.DPRContextEncoder < source > ( config: DPRConfig ) Parameters config (DPRConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare DPRContextEncoder transformer outputting pooler outputs as context representations. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.dpr.modeling_dpr.DPRContextEncoderOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. To match pretraining, DPR input sequence should be formatted with [CLS] and [SEP] tokens as follows: (a) For sequence pairs (for a pair title+text for example): A transformers.models.dpr.modeling_dpr.DPRContextEncoderOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DPRConfig) and inputs. pooler_output (torch.FloatTensor of shape (batch_size, embeddings_size)) — The DPR encoder outputs the pooler_output that corresponds to the context representation. Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer. This output is to be used to embed contexts for nearest neighbors queries with questions embeddings. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The DPRContextEncoder forward method, overrides the __call__ special method. 
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import DPRContextEncoder, DPRContextEncoderTokenizer >>> tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base") >>> model = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base") >>> input_ids = tokenizer("Hello, is my dog cute ?", return_tensors="pt")["input_ids"] >>> embeddings = model(input_ids).pooler_output DPRQuestionEncoder class transformers.DPRQuestionEncoder < source > ( config: DPRConfig ) Parameters config (DPRConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare DPRQuestionEncoder transformer outputting pooler outputs as question representations. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None token_type_ids: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.dpr.modeling_dpr.DPRQuestionEncoderOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. To match pretraining, DPR input sequence should be formatted with [CLS] and [SEP] tokens as follows: (a) For sequence pairs (for a pair title+text for example): A transformers.models.dpr.modeling_dpr.DPRQuestionEncoderOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DPRConfig) and inputs. pooler_output (torch.FloatTensor of shape (batch_size, embeddings_size)) — The DPR encoder outputs the pooler_output that corresponds to the question representation. Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer. This output is to be used to embed questions for nearest neighbors queries with context embeddings. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. 
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The DPRQuestionEncoder forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import DPRQuestionEncoder, DPRQuestionEncoderTokenizer >>> tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base") >>> model = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base") >>> input_ids = tokenizer("Hello, is my dog cute ?", return_tensors="pt")["input_ids"] >>> embeddings = model(input_ids).pooler_output DPRReader class transformers.DPRReader < source > ( config: DPRConfig ) Parameters config (DPRConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare DPRReader transformer outputting span predictions. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None inputs_embeds: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.dpr.modeling_dpr.DPRReaderOutput or tuple(torch.FloatTensor) Parameters input_ids (Tuple[torch.LongTensor] of shapes (n_passages, sequence_length)) — Indices of input sequence tokens in the vocabulary. It has to be a sequence triplet with 1) the question and 2) the passages titles and 3) the passages texts To match pretraining, DPR input_ids sequence should be formatted with [CLS] and [SEP] with the format: [CLS] <question token ids> [SEP] <titles ids> [SEP] <texts ids> DPR is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than the left. Indices can be obtained using DPRReaderTokenizer. See this class documentation for more details. What are input IDs? attention_mask (torch.FloatTensor of shape (n_passages, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? inputs_embeds (torch.FloatTensor of shape (n_passages, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. 
This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.models.dpr.modeling_dpr.DPRReaderOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DPRConfig) and inputs. start_logits (torch.FloatTensor of shape (n_passages, sequence_length)) — Logits of the start index of the span for each passage. end_logits (torch.FloatTensor of shape (n_passages, sequence_length)) — Logits of the end index of the span for each passage. relevance_logits (torch.FloatTensor of shape (n_passages, )) — Outputs of the QA classifier of the DPRReader that corresponds to the scores of each passage to answer the question, compared to all the other passages. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The DPRReader forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import DPRReader, DPRReaderTokenizer >>> tokenizer = DPRReaderTokenizer.from_pretrained("facebook/dpr-reader-single-nq-base") >>> model = DPRReader.from_pretrained("facebook/dpr-reader-single-nq-base") >>> encoded_inputs = tokenizer( ... questions=["What is love ?"], ... titles=["Haddaway"], ... texts=["'What Is Love' is a song recorded by the artist Haddaway"], ... return_tensors="pt", ... ) >>> outputs = model(**encoded_inputs) >>> start_logits = outputs.start_logits >>> end_logits = outputs.end_logits >>> relevance_logits = outputs.relevance_logits TFDPRContextEncoder class transformers.TFDPRContextEncoder < source > ( *args **kwargs ) Parameters config (DPRConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare DPRContextEncoder transformer outputting pooler outputs as context representations. This model inherits from TFPreTrainedModel. 
Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a Tensorflow tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: tf.Tensor | None = None token_type_ids: tf.Tensor | None = None inputs_embeds: tf.Tensor | None = None output_attentions: bool | None = None output_hidden_states: bool | None = None return_dict: bool | None = None training: bool = False ) → transformers.models.dpr.modeling_tf_dpr.TFDPRContextEncoderOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. To match pretraining, DPR input sequence should be formatted with [CLS] and [SEP] tokens as follows: (a) For sequence pairs (for a pair title+text for example): Returns transformers.models.dpr.modeling_tf_dpr.TFDPRContextEncoderOutput or tuple(tf.Tensor) A transformers.models.dpr.modeling_tf_dpr.TFDPRContextEncoderOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DPRConfig) and inputs. pooler_output (tf.Tensor of shape (batch_size, embeddings_size)) — The DPR encoder outputs the pooler_output that corresponds to the context representation. Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer. This output is to be used to embed contexts for nearest neighbors queries with questions embeddings. hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). 
Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFDPRContextEncoder forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import TFDPRContextEncoder, DPRContextEncoderTokenizer >>> tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base") >>> model = TFDPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base", from_pt=True) >>> input_ids = tokenizer("Hello, is my dog cute ?", return_tensors="tf")["input_ids"] >>> embeddings = model(input_ids).pooler_output TFDPRQuestionEncoder class transformers.TFDPRQuestionEncoder < source > ( *args **kwargs ) Parameters config (DPRConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare DPRQuestionEncoder transformer outputting pooler outputs as question representations. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a Tensorflow tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! 
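To make the three input formats above concrete, here is a small sketch (not part of the original page) that calls TFDPRQuestionEncoder once with keyword arguments and once with a dictionary in the first positional argument; the checkpoint name is the one used in the examples on this page:
>>> from transformers import TFDPRQuestionEncoder, DPRQuestionEncoderTokenizer
>>> tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
>>> model = TFDPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base", from_pt=True)
>>> encoded = tokenizer("Hello, is my dog cute ?", return_tensors="tf")
>>> # 1) all inputs as keyword arguments (like PyTorch models)
>>> out_kwargs = model(input_ids=encoded["input_ids"], attention_mask=encoded["attention_mask"])
>>> # 2) all inputs as a dict in the first positional argument (the format Keras methods prefer)
>>> out_dict = model({"input_ids": encoded["input_ids"], "attention_mask": encoded["attention_mask"]})
Both calls return the same pooler_output; the dict form is mainly useful with Keras methods such as fit() and predict().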
call < source > ( input_ids: TFModelInputType | None = None attention_mask: tf.Tensor | None = None token_type_ids: tf.Tensor | None = None inputs_embeds: tf.Tensor | None = None output_attentions: bool | None = None output_hidden_states: bool | None = None return_dict: bool | None = None training: bool = False ) → transformers.models.dpr.modeling_tf_dpr.TFDPRQuestionEncoderOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. To match pretraining, DPR input sequence should be formatted with [CLS] and [SEP] tokens as follows: (a) For sequence pairs (for a pair title+text for example): Returns transformers.models.dpr.modeling_tf_dpr.TFDPRQuestionEncoderOutput or tuple(tf.Tensor) A transformers.models.dpr.modeling_tf_dpr.TFDPRQuestionEncoderOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DPRConfig) and inputs. pooler_output (tf.Tensor of shape (batch_size, embeddings_size)) — The DPR encoder outputs the pooler_output that corresponds to the question representation. Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer. This output is to be used to embed questions for nearest neighbors queries with context embeddings. hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFDPRQuestionEncoder forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import TFDPRQuestionEncoder, DPRQuestionEncoderTokenizer >>> tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base") >>> model = TFDPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base", from_pt=True) >>> input_ids = tokenizer("Hello, is my dog cute ?", return_tensors="tf")["input_ids"] >>> embeddings = model(input_ids).pooler_output TFDPRReader class transformers.TFDPRReader < source > ( *args **kwargs ) Parameters config (DPRConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare DPRReader transformer outputting span predictions. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) 
This model is also a Tensorflow tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_ids: TFModelInputType | None = None attention_mask: tf.Tensor | None = None inputs_embeds: tf.Tensor | None = None output_attentions: bool | None = None output_hidden_states: bool | None = None return_dict: bool | None = None training: bool = False ) → transformers.models.dpr.modeling_tf_dpr.TFDPRReaderOutput or tuple(tf.Tensor) Parameters input_ids (Numpy array or tf.Tensor of shapes (n_passages, sequence_length)) — Indices of input sequence tokens in the vocabulary. It has to be a sequence triplet with 1) the question and 2) the passages titles and 3) the passages texts To match pretraining, DPR input_ids sequence should be formatted with [CLS] and [SEP] with the format: [CLS] <question token ids> [SEP] <titles ids> [SEP] <texts ids> DPR is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than the left. Indices can be obtained using DPRReaderTokenizer. See this class documentation for more details. attention_mask (Numpy array or tf.Tensor of shape (n_passages, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? inputs_embeds (Numpy array or tf.Tensor of shape (n_passages, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. 
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). Returns transformers.models.dpr.modeling_tf_dpr.TFDPRReaderOutput or tuple(tf.Tensor) A transformers.models.dpr.modeling_tf_dpr.TFDPRReaderOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DPRConfig) and inputs. start_logits (tf.Tensor of shape (n_passages, sequence_length)) — Logits of the start index of the span for each passage. end_logits (tf.Tensor of shape (n_passages, sequence_length)) — Logits of the end index of the span for each passage. relevance_logits (tf.Tensor of shape (n_passages, )) — Outputs of the QA classifier of the DPRReader that correspond to the scores of each passage to answer the question, compared to all the other passages. hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFDPRReader forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import TFDPRReader, DPRReaderTokenizer >>> tokenizer = DPRReaderTokenizer.from_pretrained("facebook/dpr-reader-single-nq-base") >>> model = TFDPRReader.from_pretrained("facebook/dpr-reader-single-nq-base", from_pt=True) >>> encoded_inputs = tokenizer( ... questions=["What is love ?"], ... titles=["Haddaway"], ... texts=["'What Is Love' is a song recorded by the artist Haddaway"], ... return_tensors="tf", ... ) >>> outputs = model(encoded_inputs) >>> start_logits = outputs.start_logits >>> end_logits = outputs.end_logits >>> relevance_logits = outputs.relevance_logits
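The encoder outputs above are meant to be used for nearest-neighbor retrieval: questions are embedded with the question encoder, passages with the context encoder, and relevance is scored by a dot product. As a minimal sketch (not part of the original API reference), assuming the facebook/dpr-*-single-nq-base checkpoints used in the examples above:
>>> import torch
>>> from transformers import (
...     DPRContextEncoder, DPRContextEncoderTokenizer,
...     DPRQuestionEncoder, DPRQuestionEncoderTokenizer,
... )
>>> q_tokenizer = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
>>> q_encoder = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
>>> ctx_tokenizer = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
>>> ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
>>> question = "What is love ?"
>>> passages = [
...     "'What Is Love' is a song recorded by the artist Haddaway",
...     "Paris is the capital of France",
... ]
>>> # embed the question and the candidate passages with their respective encoders
>>> q_emb = q_encoder(**q_tokenizer(question, return_tensors="pt")).pooler_output
>>> ctx_emb = ctx_encoder(**ctx_tokenizer(passages, padding=True, return_tensors="pt")).pooler_output
>>> # dot-product similarity: higher score means the passage is more relevant to the question
>>> scores = torch.matmul(q_emb, ctx_emb.T)
In a real retrieval pipeline the context embeddings would be pre-computed and indexed (e.g. with a nearest-neighbor library) rather than scored in a loop.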
https://huggingface.co/docs/transformers/model_doc/herbert
HerBERT Overview The HerBERT model was proposed in KLEJ: Comprehensive Benchmark for Polish Language Understanding by Piotr Rybak, Robert Mroczkowski, Janusz Tracz, and Ireneusz Gawlik. It is a BERT-based language model trained on Polish corpora using only the MLM objective with dynamic whole-word masking. The abstract from the paper is the following: In recent years, a series of Transformer-based models unlocked major improvements in general natural language understanding (NLU) tasks. Such a fast pace of research would not be possible without general NLU benchmarks, which allow for a fair comparison of the proposed methods. However, such benchmarks are available only for a handful of languages. To alleviate this issue, we introduce a comprehensive multi-task benchmark for the Polish language understanding, accompanied by an online leaderboard. It consists of a diverse set of tasks, adopted from existing datasets for named entity recognition, question-answering, textual entailment, and others. We also introduce a new sentiment analysis task for the e-commerce domain, named Allegro Reviews (AR). To ensure a common evaluation scheme and promote models that generalize to different NLU tasks, the benchmark includes datasets from varying domains and applications. Additionally, we release HerBERT, a Transformer-based model trained specifically for the Polish language, which has the best average performance and obtains the best results for three out of nine tasks. Finally, we provide an extensive evaluation, including several standard baselines and recently proposed, multilingual Transformer-based models. Examples of use:
>>> from transformers import HerbertTokenizer, RobertaModel
>>> tokenizer = HerbertTokenizer.from_pretrained("allegro/herbert-klej-cased-tokenizer-v1")
>>> model = RobertaModel.from_pretrained("allegro/herbert-klej-cased-v1")
>>> encoded_input = tokenizer.encode("Kto ma lepszą sztukę, ma lepszy rząd – to jasne.", return_tensors="pt")
>>> outputs = model(encoded_input)
>>> # HerBERT can also be loaded using AutoTokenizer and AutoModel:
>>> import torch
>>> from transformers import AutoModel, AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("allegro/herbert-klej-cased-tokenizer-v1")
>>> model = AutoModel.from_pretrained("allegro/herbert-klej-cased-v1")
This model was contributed by rmroczkowski. The original code can be found here. HerbertTokenizer class transformers.HerbertTokenizer < source > ( vocab_file merges_file tokenizer_file = None cls_token = '<s>' unk_token = '<unk>' pad_token = '<pad>' mask_token = '<mask>' sep_token = '</s>' bos_token = '<s>' do_lowercase_and_remove_accent = False additional_special_tokens = ['<special0>', '<special1>', '<special2>', '<special3>', '<special4>', '<special5>', '<special6>', '<special7>', '<special8>', '<special9>'] lang2id = None id2lang = None **kwargs ) Construct a BPE tokenizer for HerBERT. Peculiarities: uses BERT’s pre-tokenizer: BaseTokenizer splits tokens on spaces, and also on punctuation. Each occurrence of a punctuation character will be treated separately. Such pre-tokenized input is then BPE-subtokenized. This tokenizer inherits from XLMTokenizer, which contains most of the methods. Users should refer to the superclass for more information regarding methods. build_inputs_with_special_tokens < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs to which the special tokens will be added. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs.
List of input IDs with the appropriate special tokens. Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. An XLM sequence has the following format: single sequence: <s> X </s> pair of sequences: <s> A </s> B </s> Converts a sequence of tokens (string) into a single string. create_token_type_ids_from_sequences < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of token type IDs according to the given sequence(s). Create a mask from the two sequences passed to be used in a sequence-pair classification task. An XLM sequence pair mask has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence    | second sequence |
If token_ids_1 is None, this method only returns the first portion of the mask (0s). get_special_tokens_mask < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None already_has_special_tokens: bool = False ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. already_has_special_tokens (bool, optional, defaults to False) — Whether or not the token list is already formatted with special tokens for the model. A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model method. HerbertTokenizerFast class transformers.HerbertTokenizerFast < source > ( vocab_file = None merges_file = None tokenizer_file = None cls_token = '<s>' unk_token = '<unk>' pad_token = '<pad>' mask_token = '<mask>' sep_token = '</s>' **kwargs ) Parameters vocab_file (str) — Path to the vocabulary file. merges_file (str) — Path to the merges file. Construct a “Fast” BPE tokenizer for HerBERT (backed by HuggingFace’s tokenizers library). Peculiarities: uses BERT’s pre-tokenizer: BertPreTokenizer splits tokens on spaces, and also on punctuation. Each occurrence of a punctuation character will be treated separately. This tokenizer inherits from PreTrainedTokenizer which contains most of the methods. Users should refer to the superclass for more information regarding methods. build_inputs_with_special_tokens < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs to which the special tokens will be added. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of input IDs with the appropriate special tokens. Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. A HerBERT sequence, like a BERT sequence, has the following format: single sequence: <s> X </s> pair of sequences: <s> A </s> B </s> create_token_type_ids_from_sequences < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. List of token type IDs according to the given sequence(s).
Create a mask from the two sequences passed to be used in a sequence-pair classification task. A HerBERT sequence pair mask, like a BERT one, has the following format:
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1
| first sequence    | second sequence |
get_special_tokens_mask < source > ( token_ids_0: typing.List[int] token_ids_1: typing.Optional[typing.List[int]] = None already_has_special_tokens: bool = False ) → List[int] Parameters token_ids_0 (List[int]) — List of IDs. token_ids_1 (List[int], optional) — Optional second list of IDs for sequence pairs. already_has_special_tokens (bool, optional, defaults to False) — Whether or not the token list is already formatted with special tokens for the model. A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model method.
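A minimal sketch (not part of the original page) that exercises the three methods documented above with the tokenizer checkpoint used in the usage example earlier, showing the <s> A </s> B </s> layout and the corresponding token type and special-token masks:
>>> from transformers import HerbertTokenizer
>>> tokenizer = HerbertTokenizer.from_pretrained("allegro/herbert-klej-cased-tokenizer-v1")
>>> ids_a = tokenizer.encode("Kto ma lepszą sztukę?", add_special_tokens=False)
>>> ids_b = tokenizer.encode("Ma lepszy rząd.", add_special_tokens=False)
>>> # <s> A </s> B </s>
>>> with_special = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)
>>> # 0s for the first sequence (including its special tokens), 1s for the second
>>> type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)
>>> # 1 marks the positions of the added <s> / </s> tokens in the combined sequence
>>> special_mask = tokenizer.get_special_tokens_mask(ids_a, ids_b)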
https://huggingface.co/docs/transformers/model_doc/groupvit
GroupViT Overview The GroupViT model was proposed in GroupViT: Semantic Segmentation Emerges from Text Supervision by Jiarui Xu, Shalini De Mello, Sifei Liu, Wonmin Byeon, Thomas Breuel, Jan Kautz, Xiaolong Wang. Inspired by CLIP, GroupViT is a vision-language model that can perform zero-shot semantic segmentation over any given vocabulary of categories. The abstract from the paper is the following: Grouping and recognition are important components of visual scene understanding, e.g., for object detection and semantic segmentation. With end-to-end deep learning systems, grouping of image regions usually happens implicitly via top-down supervision from pixel-level recognition labels. Instead, in this paper, we propose to bring back the grouping mechanism into deep networks, which allows semantic segments to emerge automatically with only text supervision. We propose a hierarchical Grouping Vision Transformer (GroupViT), which goes beyond the regular grid structure representation and learns to group image regions into progressively larger arbitrary-shaped segments. We train GroupViT jointly with a text encoder on a large-scale image-text dataset via contrastive losses. With only text supervision and without any pixel-level annotations, GroupViT learns to group together semantic regions and successfully transfers to the task of semantic segmentation in a zero-shot manner, i.e., without any further fine-tuning. It achieves a zero-shot accuracy of 52.3% mIoU on the PASCAL VOC 2012 and 22.4% mIoU on PASCAL Context datasets, and performs competitively to state-of-the-art transfer-learning methods requiring greater levels of supervision. Tips: You may specify output_segmentation=True in the forward of GroupViTModel to get the segmentation logits for the input texts. This model was contributed by xvjiarui. The TensorFlow version was contributed by ariG23498 with the help of Yih-Dar SHIEH, Amy Roberts, and Joao Gante. The original code can be found here. Resources A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with GroupViT. The quickest way to get started with GroupViT is by checking the example notebooks (which showcase zero-shot segmentation inference). One can also check out the HuggingFace Spaces demo to play with GroupViT. GroupViTConfig class transformers.GroupViTConfig < source > ( text_config = None vision_config = None projection_dim = 256 projection_intermediate_dim = 4096 logit_scale_init_value = 2.6592 **kwargs ) Parameters text_config (dict, optional) — Dictionary of configuration options used to initialize GroupViTTextConfig. vision_config (dict, optional) — Dictionary of configuration options used to initialize GroupViTVisionConfig. projection_dim (int, optional, defaults to 256) — Dimensionality of the text and vision projection layers. projection_intermediate_dim (int, optional, defaults to 4096) — Dimensionality of the intermediate layer of the text and vision projection layers. logit_scale_init_value (float, optional, defaults to 2.6592) — The initial value of the logit_scale parameter. The default is used as per the original GroupViT implementation. kwargs (optional) — Dictionary of keyword arguments. GroupViTConfig is the configuration class to store the configuration of a GroupViTModel. It is used to instantiate a GroupViT model according to the specified arguments, defining the text model and vision model configs.
Instantiating a configuration with the defaults will yield a similar configuration to that of the GroupViT nvidia/groupvit-gcc-yfcc architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. from_text_vision_configs < source > ( text_config: GroupViTTextConfig vision_config: GroupViTVisionConfig **kwargs ) → GroupViTConfig An instance of a configuration object. Instantiate a GroupViTConfig (or a derived class) from a GroupViT text model configuration and a GroupViT vision model configuration. GroupViTTextConfig class transformers.GroupViTTextConfig < source > ( vocab_size = 49408 hidden_size = 256 intermediate_size = 1024 num_hidden_layers = 12 num_attention_heads = 4 max_position_embeddings = 77 hidden_act = 'quick_gelu' layer_norm_eps = 1e-05 dropout = 0.0 attention_dropout = 0.0 initializer_range = 0.02 initializer_factor = 1.0 pad_token_id = 1 bos_token_id = 49406 eos_token_id = 49407 **kwargs ) Parameters vocab_size (int, optional, defaults to 49408) — Vocabulary size of the GroupViT text model. Defines the number of different tokens that can be represented by the input_ids passed when calling GroupViTModel. hidden_size (int, optional, defaults to 256) — Dimensionality of the encoder layers and the pooler layer. intermediate_size (int, optional, defaults to 1024) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder. num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 4) — Number of attention heads for each attention layer in the Transformer encoder. max_position_embeddings (int, optional, defaults to 77) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). hidden_act (str or function, optional, defaults to "quick_gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu", "gelu_new" and "quick_gelu" are supported. layer_norm_eps (float, optional, defaults to 1e-5) — The epsilon used by the layer normalization layers. attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities. dropout (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. initializer_factor (float, optional, defaults to 1.0) — A factor for initializing all weight matrices (should be kept to 1, used internally for initialization testing). This is the configuration class to store the configuration of a GroupViTTextModel. It is used to instantiate a GroupViT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the GroupViT nvidia/groupvit-gcc-yfcc architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.
Example:
>>> from transformers import GroupViTTextConfig, GroupViTTextModel
>>> # Initializing a GroupViTTextModel with nvidia/groupvit-gcc-yfcc style configuration
>>> configuration = GroupViTTextConfig()
>>> model = GroupViTTextModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
GroupViTVisionConfig class transformers.GroupViTVisionConfig < source > ( hidden_size = 384 intermediate_size = 1536 depths = [6, 3, 3] num_hidden_layers = 12 num_group_tokens = [64, 8, 0] num_output_groups = [64, 8, 8] num_attention_heads = 6 image_size = 224 patch_size = 16 num_channels = 3 hidden_act = 'gelu' layer_norm_eps = 1e-05 dropout = 0.0 attention_dropout = 0.0 initializer_range = 0.02 initializer_factor = 1.0 assign_eps = 1.0 assign_mlp_ratio = [0.5, 4] **kwargs ) Parameters hidden_size (int, optional, defaults to 384) — Dimensionality of the encoder layers and the pooler layer. intermediate_size (int, optional, defaults to 1536) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder. depths (List[int], optional, defaults to [6, 3, 3]) — The number of layers in each encoder block. num_group_tokens (List[int], optional, defaults to [64, 8, 0]) — The number of group tokens for each stage. num_output_groups (List[int], optional, defaults to [64, 8, 8]) — The number of output groups for each stage; 0 means no group. num_attention_heads (int, optional, defaults to 6) — Number of attention heads for each attention layer in the Transformer encoder. image_size (int, optional, defaults to 224) — The size (resolution) of each image. patch_size (int, optional, defaults to 16) — The size (resolution) of each patch. hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu", "gelu_new" and "quick_gelu" are supported. layer_norm_eps (float, optional, defaults to 1e-5) — The epsilon used by the layer normalization layers. dropout (float, optional, defaults to 0.0) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. attention_dropout (float, optional, defaults to 0.0) — The dropout ratio for the attention probabilities. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. initializer_factor (float, optional, defaults to 1.0) — A factor for initializing all weight matrices (should be kept to 1, used internally for initialization testing). This is the configuration class to store the configuration of a GroupViTVisionModel. It is used to instantiate a GroupViT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the GroupViT nvidia/groupvit-gcc-yfcc architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example:
>>> from transformers import GroupViTVisionConfig, GroupViTVisionModel
>>> # Initializing a GroupViTVisionModel with nvidia/groupvit-gcc-yfcc style configuration
>>> configuration = GroupViTVisionConfig()
>>> model = GroupViTVisionModel(configuration)
>>> # Accessing the model configuration
>>> configuration = model.config
GroupViTModel class transformers.GroupViTModel < source > ( config: GroupViTConfig ) Parameters config (GroupViTConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration.
Check out the from_pretrained() method to load the model weights. This model is a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None pixel_values: typing.Optional[torch.FloatTensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.LongTensor] = None return_loss: typing.Optional[bool] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None output_segmentation: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.groupvit.modeling_groupvit.GroupViTModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using CLIPTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See CLIPImageProcessor.call() for details. return_loss (bool, optional) — Whether or not to return the contrastive loss. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns transformers.models.groupvit.modeling_groupvit.GroupViTModelOutput or tuple(torch.FloatTensor) A transformers.models.groupvit.modeling_groupvit.GroupViTModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.groupvit.configuration_groupvit.GroupViTConfig'>) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when return_loss is True) — Contrastive loss for image-text similarity. logits_per_image (torch.FloatTensor of shape (image_batch_size, text_batch_size)) — The scaled dot product scores between image_embeds and text_embeds. This represents the image-text similarity scores. logits_per_text (torch.FloatTensor of shape (text_batch_size, image_batch_size)) — The scaled dot product scores between text_embeds and image_embeds. This represents the text-image similarity scores. segmentation_logits (torch.FloatTensor of shape (batch_size, config.num_labels, logits_height, logits_width)) — Classification scores for each pixel. The logits returned do not necessarily have the same size as the pixel_values passed as inputs. 
This is to avoid doing two interpolations and lose some quality when a user needs to resize the logits to the original image size as post-processing. You should always check your logits shape and resize as needed. text_embeds (torch.FloatTensor of shape (batch_size, output_dim) — The text embeddings obtained by applying the projection layer to the pooled output of GroupViTTextModel. image_embeds (torch.FloatTensor of shape (batch_size, output_dim) — The image embeddings obtained by applying the projection layer to the pooled output of GroupViTVisionModel. text_model_output (BaseModelOutputWithPooling) — The output of the GroupViTTextModel. vision_model_output (BaseModelOutputWithPooling) — The output of the GroupViTVisionModel. The GroupViTModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, GroupViTModel >>> model = GroupViTModel.from_pretrained("nvidia/groupvit-gcc-yfcc") >>> processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor( ... text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="pt", padding=True ... ) >>> outputs = model(**inputs) >>> logits_per_image = outputs.logits_per_image >>> probs = logits_per_image.softmax(dim=1) get_text_features < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → text_features (torch.FloatTensor of shape (batch_size, output_dim) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using CLIPTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns text_features (torch.FloatTensor of shape (batch_size, output_dim) The text embeddings obtained by applying the projection layer to the pooled output of GroupViTTextModel. 
The GroupViTModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import CLIPTokenizer, GroupViTModel >>> model = GroupViTModel.from_pretrained("nvidia/groupvit-gcc-yfcc") >>> tokenizer = CLIPTokenizer.from_pretrained("nvidia/groupvit-gcc-yfcc") >>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt") >>> text_features = model.get_text_features(**inputs) get_image_features < source > ( pixel_values: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → image_features (torch.FloatTensor of shape (batch_size, output_dim) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See CLIPImageProcessor.call() for details. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. Returns image_features (torch.FloatTensor of shape (batch_size, output_dim) The image embeddings obtained by applying the projection layer to the pooled output of GroupViTVisionModel. The GroupViTModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, GroupViTModel >>> model = GroupViTModel.from_pretrained("nvidia/groupvit-gcc-yfcc") >>> processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor(images=image, return_tensors="pt") >>> image_features = model.get_image_features(**inputs) GroupViTTextModel class transformers.GroupViTTextModel < source > ( config: GroupViTTextConfig ) forward < source > ( input_ids: typing.Optional[torch.Tensor] = None attention_mask: typing.Optional[torch.Tensor] = None position_ids: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide it. Indices can be obtained using CLIPTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? 
attention_mask (torch.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.groupvit.configuration_groupvit.GroupViTTextConfig'>) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The GroupViTTextModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Examples: >>> from transformers import CLIPTokenizer, GroupViTTextModel >>> tokenizer = CLIPTokenizer.from_pretrained("nvidia/groupvit-gcc-yfcc") >>> model = GroupViTTextModel.from_pretrained("nvidia/groupvit-gcc-yfcc") >>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_state = outputs.last_hidden_state >>> pooled_output = outputs.pooler_output GroupViTVisionModel class transformers.GroupViTVisionModel < source > ( config: GroupViTVisionConfig ) forward < source > ( pixel_values: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPooling or tuple(torch.FloatTensor) Parameters pixel_values (torch.FloatTensor of shape (batch_size, num_channels, height, width)) — Pixel values. Padding will be ignored by default should you provide it. Pixel values can be obtained using AutoImageProcessor. See CLIPImageProcessor.call() for details. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutputWithPooling or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.groupvit.configuration_groupvit.GroupViTVisionConfig'>) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The GroupViTVisionModel forward method, overrides the __call__ special method. 
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, GroupViTVisionModel >>> processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc") >>> model = GroupViTVisionModel.from_pretrained("nvidia/groupvit-gcc-yfcc") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor(images=image, return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_state = outputs.last_hidden_state >>> pooled_output = outputs.pooler_output TFGroupViTModel class transformers.TFGroupViTModel < source > ( *args **kwargs ) Parameters config (GroupViTConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TF 2.0 models accepts two formats as inputs: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional arguments. This second option is useful when using tf.keras.Model.fit method which currently requires having all the tensors in the first argument of the model call function: model(inputs). If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument : a single Tensor with input_ids only and nothing else: model(input_ids) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids}) call < source > ( input_ids: TFModelInputType | None = None pixel_values: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None return_loss: Optional[bool] = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None output_segmentation: Optional[bool] = None return_dict: Optional[bool] = None training: bool = False ) → transformers.models.groupvit.modeling_tf_groupvit.TFGroupViTModelOutput or tuple(tf.Tensor) Parameters input_ids (np.ndarray, tf.Tensor, List[tf.Tensor] `Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? 
pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor] Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See CLIPImageProcessor.call() for details. attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? return_loss (bool, optional) — Whether or not to return the contrastive loss. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to `False“) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). Returns transformers.models.groupvit.modeling_tf_groupvit.TFGroupViTModelOutput or tuple(tf.Tensor) A transformers.models.groupvit.modeling_tf_groupvit.TFGroupViTModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.groupvit.configuration_groupvit.GroupViTConfig'>) and inputs. loss (tf.Tensor of shape (1,), optional, returned when return_loss is True) — Contrastive loss for image-text similarity. logits_per_image (tf.Tensor of shape (image_batch_size, text_batch_size)) — The scaled dot product scores between image_embeds and text_embeds. This represents the image-text similarity scores. logits_per_text (tf.Tensor of shape (text_batch_size, image_batch_size)) — The scaled dot product scores between text_embeds and image_embeds. This represents the text-image similarity scores. segmentation_logits (tf.Tensor of shape (batch_size, config.num_labels, logits_height, logits_width)) — Classification scores for each pixel. The logits returned do not necessarily have the same size as the pixel_values passed as inputs. This is to avoid doing two interpolations and lose some quality when a user needs to resize the logits to the original image size as post-processing. You should always check your logits shape and resize as needed. text_embeds (tf.Tensor of shape (batch_size, output_dim) — The text embeddings obtained by applying the projection layer to the pooled output of TFGroupViTTextModel. image_embeds (tf.Tensor of shape (batch_size, output_dim) — The image embeddings obtained by applying the projection layer to the pooled output of TFGroupViTVisionModel. 
text_model_output (TFBaseModelOutputWithPooling) — The output of the TFGroupViTTextModel. vision_model_output (TFBaseModelOutputWithPooling) — The output of the TFGroupViTVisionModel. The TFGroupViTModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, TFGroupViTModel >>> import tensorflow as tf >>> model = TFGroupViTModel.from_pretrained("nvidia/groupvit-gcc-yfcc") >>> processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor( ... text=["a photo of a cat", "a photo of a dog"], images=image, return_tensors="tf", padding=True ... ) >>> outputs = model(**inputs) >>> logits_per_image = outputs.logits_per_image >>> probs = tf.math.softmax(logits_per_image, axis=1) get_text_features < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: bool = False ) → text_features (tf.Tensor of shape (batch_size, output_dim) Parameters input_ids (np.ndarray, tf.Tensor, List[tf.Tensor] `Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to `False“) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). 
Returns text_features (tf.Tensor of shape (batch_size, output_dim) The text embeddings obtained by applying the projection layer to the pooled output of TFGroupViTTextModel. The TFGroupViTModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import CLIPTokenizer, TFGroupViTModel >>> model = TFGroupViTModel.from_pretrained("nvidia/groupvit-gcc-yfcc") >>> tokenizer = CLIPTokenizer.from_pretrained("nvidia/groupvit-gcc-yfcc") >>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="tf") >>> text_features = model.get_text_features(**inputs) get_image_features < source > ( pixel_values: TFModelInputType | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: bool = False ) → image_features (tf.Tensor of shape (batch_size, output_dim) Parameters pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See CLIPImageProcessor.call() for details. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to `False“) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). Returns image_features (tf.Tensor of shape (batch_size, output_dim) The image embeddings obtained by applying the projection layer to the pooled output of TFGroupViTVisionModel. The TFGroupViTModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Examples: >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, TFGroupViTModel >>> model = TFGroupViTModel.from_pretrained("nvidia/groupvit-gcc-yfcc") >>> processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor(images=image, return_tensors="tf") >>> image_features = model.get_image_features(**inputs) TFGroupViTTextModel class transformers.TFGroupViTTextModel < source > ( *args **kwargs ) call < source > ( input_ids: TFModelInputType | None = None attention_mask: np.ndarray | tf.Tensor | None = None position_ids: np.ndarray | tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: bool = False ) → transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor) Parameters input_ids (np.ndarray, tf.Tensor, List[tf.Tensor] `Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? position_ids (np.ndarray or tf.Tensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to `False“) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). A transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.groupvit.configuration_groupvit.GroupViTTextConfig'>) and inputs. last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. 
The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. This output is usually not a good summary of the semantic content of the input, you’re often better with averaging or pooling the sequence of hidden-states for the whole input sequence. hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFGroupViTTextModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from transformers import CLIPTokenizer, TFGroupViTTextModel >>> tokenizer = CLIPTokenizer.from_pretrained("nvidia/groupvit-gcc-yfcc") >>> model = TFGroupViTTextModel.from_pretrained("nvidia/groupvit-gcc-yfcc") >>> inputs = tokenizer(["a photo of a cat", "a photo of a dog"], padding=True, return_tensors="tf") >>> outputs = model(**inputs) >>> last_hidden_state = outputs.last_hidden_state >>> pooled_output = outputs.pooler_output TFGroupViTVisionModel class transformers.TFGroupViTVisionModel < source > ( *args **kwargs ) call < source > ( pixel_values: TFModelInputType | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: bool = False ) → transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or tuple(tf.Tensor) Parameters pixel_values (np.ndarray, tf.Tensor, List[tf.Tensor], Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape (batch_size, num_channels, height, width)) — Pixel values. Pixel values can be obtained using AutoImageProcessor. See CLIPImageProcessor.call() for details. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to `False“) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). 
A transformers.modeling_tf_outputs.TFBaseModelOutputWithPooling or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (<class 'transformers.models.groupvit.configuration_groupvit.GroupViTVisionConfig'>) and inputs. last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (tf.Tensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. This output is usually not a good summary of the semantic content of the input, you’re often better with averaging or pooling the sequence of hidden-states for the whole input sequence. hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFGroupViTVisionModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Examples: >>> from PIL import Image >>> import requests >>> from transformers import AutoProcessor, TFGroupViTVisionModel >>> processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc") >>> model = TFGroupViTVisionModel.from_pretrained("nvidia/groupvit-gcc-yfcc") >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" >>> image = Image.open(requests.get(url, stream=True).raw) >>> inputs = processor(images=image, return_tensors="tf") >>> outputs = model(**inputs) >>> last_hidden_state = outputs.last_hidden_state >>> pooled_output = outputs.pooler_output
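The TFGroupViTModel call documented earlier also accepts output_segmentation=True, which adds the segmentation_logits described above to the output. The snippet below is a minimal sketch of turning those logits into a per-pixel mask; the text prompts and the resizing step are illustrative choices rather than part of the official example.

>>> import tensorflow as tf
>>> import requests
>>> from PIL import Image
>>> from transformers import AutoProcessor, TFGroupViTModel

>>> processor = AutoProcessor.from_pretrained("nvidia/groupvit-gcc-yfcc")
>>> model = TFGroupViTModel.from_pretrained("nvidia/groupvit-gcc-yfcc")

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> prompts = ["a photo of a cat", "a photo of a remote control"]  # illustrative label set

>>> inputs = processor(text=prompts, images=image, return_tensors="tf", padding=True)
>>> outputs = model(**inputs, output_segmentation=True)

>>> # segmentation_logits has shape (batch_size, num_prompts, logits_height, logits_width)
>>> seg_logits = outputs.segmentation_logits
>>> # resize to the original image size (channels-last for tf.image.resize) before the per-pixel argmax
>>> seg_logits = tf.image.resize(tf.transpose(seg_logits, [0, 2, 3, 1]), size=image.size[::-1])
>>> mask = tf.argmax(seg_logits, axis=-1)[0]  # (height, width) map of prompt indices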
https://huggingface.co/docs/transformers/model_doc/ibert
I-BERT Overview The I-BERT model was proposed in I-BERT: Integer-only BERT Quantization by Sehoon Kim, Amir Gholami, Zhewei Yao, Michael W. Mahoney and Kurt Keutzer. It’s a quantized version of RoBERTa running inference up to four times faster. The abstract from the paper is the following: Transformer based models, like BERT and RoBERTa, have achieved state-of-the-art results in many Natural Language Processing tasks. However, their memory footprint, inference latency, and power consumption are prohibitive for efficient inference at the edge, and even at the data center. While quantization can be a viable solution for this, previous work on quantizing Transformer based models use floating-point arithmetic during inference, which cannot efficiently utilize integer-only logical units such as the recent Turing Tensor Cores, or traditional integer-only ARM processors. In this work, we propose I-BERT, a novel quantization scheme for Transformer based models that quantizes the entire inference with integer-only arithmetic. Based on lightweight integer-only approximation methods for nonlinear operations, e.g., GELU, Softmax, and Layer Normalization, I-BERT performs an end-to-end integer-only BERT inference without any floating point calculation. We evaluate our approach on GLUE downstream tasks using RoBERTa-Base/Large. We show that for both cases, I-BERT achieves similar (and slightly higher) accuracy as compared to the full-precision baseline. Furthermore, our preliminary implementation of I-BERT shows a speedup of 2.4 - 4.0x for INT8 inference on a T4 GPU system as compared to FP32 inference. The framework has been developed in PyTorch and has been open-sourced. This model was contributed by kssteven. The original code can be found here. Documentation resources Text classification task guide Token classification task guide Question answering task guide Masked language modeling task guide Multiple choice task guide IBertConfig class transformers.IBertConfig < source > ( vocab_size = 30522 hidden_size = 768 num_hidden_layers = 12 num_attention_heads = 12 intermediate_size = 3072 hidden_act = 'gelu' hidden_dropout_prob = 0.1 attention_probs_dropout_prob = 0.1 max_position_embeddings = 512 type_vocab_size = 2 initializer_range = 0.02 layer_norm_eps = 1e-12 pad_token_id = 1 bos_token_id = 0 eos_token_id = 2 position_embedding_type = 'absolute' quant_mode = False force_dequant = 'none' **kwargs ) Parameters vocab_size (int, optional, defaults to 30522) — Vocabulary size of the I-BERT model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling IBertModel hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder. num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (often named feed-forward) layer in the Transformer encoder. hidden_act (str or Callable, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported. hidden_dropout_prob (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. 
attention_probs_dropout_prob (float, optional, defaults to 0.1) — The dropout ratio for the attention probabilities. max_position_embeddings (int, optional, defaults to 512) — The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). type_vocab_size (int, optional, defaults to 2) — The vocabulary size of the token_type_ids passed when calling IBertModel. initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (float, optional, defaults to 1e-12) — The epsilon used by the layer normalization layers. position_embedding_type (str, optional, defaults to "absolute") — Type of position embedding. Choose one of "absolute", "relative_key", "relative_key_query". For positional embeddings use "absolute". For more information on "relative_key", please refer to Self-Attention with Relative Position Representations (Shaw et al.). For more information on "relative_key_query", please refer to Method 4 in Improve Transformer Models with Better Relative Position Embeddings (Huang et al.). quant_mode (bool, optional, defaults to False) — Whether to quantize the model or not. force_dequant (str, optional, defaults to "none") — Forces dequantization of specific nonlinear layers. Dequantized layers are then executed with full precision. "none", "gelu", "softmax", "layernorm" and "nonlinear" are supported. By default, it is set to "none", which does not dequantize any layer. Please specify "gelu", "softmax", or "layernorm" to dequantize GELU, Softmax, or LayerNorm, respectively. "nonlinear" will dequantize all nonlinear layers, i.e., GELU, Softmax, and LayerNorm. This is the configuration class to store the configuration of an IBertModel. It is used to instantiate an I-BERT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the IBERT kssteven/ibert-roberta-base architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. IBertModel class transformers.IBertModel < source > ( config add_pooling_layer = True ) Parameters config (IBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare I-BERT Model transformer outputting raw hidden-states without any specific head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of cross-attention is added between the self-attention layers, following the architecture described in Attention is all you need by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. 
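Before the forward() reference below, here is a minimal sketch of wiring the configuration options described above into the model class; the quant_mode and force_dequant values shown are illustrative, not a recommendation.

>>> from transformers import IBertConfig, IBertModel

>>> # Initializing an I-BERT configuration (illustrative settings)
>>> configuration = IBertConfig(quant_mode=False, force_dequant="none")

>>> # Initializing a model (with random weights) from that configuration
>>> model = IBertModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config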
forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutputWithPoolingAndCrossAttentions or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (IBertConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. pooler_output (torch.FloatTensor of shape (batch_size, hidden_size)) — Last layer hidden-state of the first token of the sequence (classification token) after further processing through the layers used for the auxiliary pretraining task. E.g. for BERT-family of models, this returns the classification token after processing through a linear layer and a tanh activation function. 
The linear layer weights are trained from the next sentence prediction (classification) objective during pretraining. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. cross_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True and config.add_cross_attention=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights of the decoder’s cross-attention layer, after the attention softmax, used to compute the weighted average in the cross-attention heads. past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head). Contains pre-computed hidden-states (key and values in the self-attention blocks and optionally if config.is_encoder_decoder=True in the cross-attention blocks) that can be used (see past_key_values input) to speed up sequential decoding. The IBertModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, IBertModel >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("kssteven/ibert-roberta-base") >>> model = IBertModel.from_pretrained("kssteven/ibert-roberta-base") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> outputs = model(**inputs) >>> last_hidden_states = outputs.last_hidden_state IBertForMaskedLM class transformers.IBertForMaskedLM < source > ( config ) Parameters config (IBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. I-BERT Model with a language modeling head on top. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. 
Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MaskedLMOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size] kwargs (Dict[str, any], optional, defaults to {}) — Used to hide legacy arguments that have been deprecated. A transformers.modeling_outputs.MaskedLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (IBertConfig) and inputs. 
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Masked language modeling (MLM) loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The IBertForMaskedLM forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, IBertForMaskedLM >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("kssteven/ibert-roberta-base") >>> model = IBertForMaskedLM.from_pretrained("kssteven/ibert-roberta-base") >>> inputs = tokenizer("The capital of France is <mask>.", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> >>> mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0] >>> predicted_token_id = logits[0, mask_token_index].argmax(axis=-1) >>> labels = tokenizer("The capital of France is Paris.", return_tensors="pt")["input_ids"] >>> >>> labels = torch.where(inputs.input_ids == tokenizer.mask_token_id, labels, -100) >>> outputs = model(**inputs, labels=labels) IBertForSequenceClassification class transformers.IBertForSequenceClassification < source > ( config ) Parameters config (IBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. I-BERT Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
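Because the kssteven/ibert-roberta-base checkpoint ships without a task head, the classification head is freshly initialized and still needs fine-tuning before the forward pass below yields meaningful scores. A minimal loading sketch with an illustrative three-class label space:

>>> from transformers import IBertForSequenceClassification

>>> # num_labels and the label names are illustrative; the head must be trained before use
>>> model = IBertForSequenceClassification.from_pretrained(
...     "kssteven/ibert-roberta-base",
...     num_labels=3,
...     id2label={0: "negative", 1: "neutral", 2: "positive"},
...     label2id={"negative": 0, "neutral": 1, "positive": 2},
... )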
forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (IBertConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. 
logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The IBertForSequenceClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example of single-label classification: >>> import torch >>> from transformers import AutoTokenizer, IBertForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("kssteven/ibert-roberta-base") >>> model = IBertForSequenceClassification.from_pretrained("kssteven/ibert-roberta-base") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_id = logits.argmax().item() >>> >>> num_labels = len(model.config.id2label) >>> model = IBertForSequenceClassification.from_pretrained("kssteven/ibert-roberta-base", num_labels=num_labels) >>> labels = torch.tensor([1]) >>> loss = model(**inputs, labels=labels).loss Example of multi-label classification: >>> import torch >>> from transformers import AutoTokenizer, IBertForSequenceClassification >>> tokenizer = AutoTokenizer.from_pretrained("kssteven/ibert-roberta-base") >>> model = IBertForSequenceClassification.from_pretrained("kssteven/ibert-roberta-base", problem_type="multi_label_classification") >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_class_ids = torch.arange(0, logits.shape[-1])[torch.sigmoid(logits).squeeze(dim=0) > 0.5] >>> >>> num_labels = len(model.config.id2label) >>> model = IBertForSequenceClassification.from_pretrained( ... "kssteven/ibert-roberta-base", num_labels=num_labels, problem_type="multi_label_classification" ... ) >>> labels = torch.sum( ... torch.nn.functional.one_hot(predicted_class_ids[None, :].clone(), num_classes=num_labels), dim=1 ... ).to(torch.float) >>> loss = model(**inputs, labels=labels).loss IBertForMultipleChoice class transformers.IBertForMultipleChoice < source > ( config ) Parameters config (IBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. I-BERT Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a softmax) e.g. for RocStories/SWAG tasks. This model inherits from PreTrainedModel. 
Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.MultipleChoiceModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, num_choices, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, num_choices, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices-1] where num_choices is the size of the second dimension of the input tensors. 
(See input_ids above) A transformers.modeling_outputs.MultipleChoiceModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (IBertConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, num_choices)) — num_choices is the second dimension of the input tensors. (see input_ids above). Classification scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The IBertForMultipleChoice forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, IBertForMultipleChoice >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("kssteven/ibert-roberta-base") >>> model = IBertForMultipleChoice.from_pretrained("kssteven/ibert-roberta-base") >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." >>> choice0 = "It is eaten with a fork and a knife." >>> choice1 = "It is eaten while held in the hand." >>> labels = torch.tensor(0).unsqueeze(0) >>> encoding = tokenizer([prompt, prompt], [choice0, choice1], return_tensors="pt", padding=True) >>> outputs = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}, labels=labels) >>> >>> loss = outputs.loss >>> logits = outputs.logits IBertForTokenClassification class transformers.IBertForTokenClassification < source > ( config ) Parameters config (IBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. I-BERT Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for Named-Entity-Recognition (NER) tasks. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
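As with the other heads, the token classification layer on top of kssteven/ibert-roberta-base is randomly initialized, so it has to be fine-tuned before the example below produces useful tags. A sketch with an illustrative BIO label space:

>>> from transformers import IBertForTokenClassification

>>> # illustrative NER labels; the classification head still needs fine-tuning
>>> id2label = {0: "O", 1: "B-PER", 2: "I-PER", 3: "B-LOC", 4: "I-LOC"}
>>> model = IBertForTokenClassification.from_pretrained(
...     "kssteven/ibert-roberta-base",
...     num_labels=len(id2label),
...     id2label=id2label,
...     label2id={label: idx for idx, label in id2label.items()},
... )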
forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.TokenClassifierOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) — Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1]. A transformers.modeling_outputs.TokenClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (IBertConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification loss. logits (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels)) — Classification scores (before SoftMax). 
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The IBertForTokenClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, IBertForTokenClassification >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("kssteven/ibert-roberta-base") >>> model = IBertForTokenClassification.from_pretrained("kssteven/ibert-roberta-base") >>> inputs = tokenizer( ... "HuggingFace is a company based in Paris and New York", add_special_tokens=False, return_tensors="pt" ... ) >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_token_class_ids = logits.argmax(-1) >>> >>> >>> >>> predicted_tokens_classes = [model.config.id2label[t.item()] for t in predicted_token_class_ids[0]] >>> labels = predicted_token_class_ids >>> loss = model(**inputs, labels=labels).loss IBertForQuestionAnswering class transformers.IBertForQuestionAnswering < source > ( config ) Parameters config (IBertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. I-BERT Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute span start logits and span end logits). This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
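The question-answering head can also be driven through the high-level pipeline API. This is only a sketch: the span head of the base checkpoint is untrained, so the returned answer is not meaningful until the model has been fine-tuned on a QA dataset.

>>> from transformers import pipeline

>>> # the checkpoint is illustrative; use a QA fine-tuned I-BERT checkpoint for real predictions
>>> qa = pipeline("question-answering", model="kssteven/ibert-roberta-base")
>>> qa(question="Who was Jim Henson?", context="Jim Henson was a nice puppet")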
forward < source > ( input_ids: typing.Optional[torch.LongTensor] = None attention_mask: typing.Optional[torch.FloatTensor] = None token_type_ids: typing.Optional[torch.LongTensor] = None position_ids: typing.Optional[torch.LongTensor] = None head_mask: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None start_positions: typing.Optional[torch.LongTensor] = None end_positions: typing.Optional[torch.LongTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.QuestionAnsweringModelOutput or tuple(torch.FloatTensor) Parameters input_ids (torch.LongTensor of shape (batch_size, sequence_length)) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.encode() and PreTrainedTokenizer.call() for details. What are input IDs? attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional) — Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. start_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. end_positions (torch.LongTensor of shape (batch_size,), optional) — Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss. 
A transformers.modeling_outputs.QuestionAnsweringModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (IBertConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Total span extraction loss is the sum of a Cross-Entropy for the start and end positions. start_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-start scores (before SoftMax). end_logits (torch.FloatTensor of shape (batch_size, sequence_length)) — Span-end scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The IBertForQuestionAnswering forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoTokenizer, IBertForQuestionAnswering >>> import torch >>> tokenizer = AutoTokenizer.from_pretrained("kssteven/ibert-roberta-base") >>> model = IBertForQuestionAnswering.from_pretrained("kssteven/ibert-roberta-base") >>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" >>> inputs = tokenizer(question, text, return_tensors="pt") >>> with torch.no_grad(): ... outputs = model(**inputs) >>> answer_start_index = outputs.start_logits.argmax() >>> answer_end_index = outputs.end_logits.argmax() >>> predict_answer_tokens = inputs.input_ids[0, answer_start_index : answer_end_index + 1] >>> >>> target_start_index = torch.tensor([14]) >>> target_end_index = torch.tensor([15]) >>> outputs = model(**inputs, start_positions=target_start_index, end_positions=target_end_index) >>> loss = outputs.loss
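For readability, the predicted span can be decoded back into a string with the tokenizer. This is a minimal continuation of the example above (note that kssteven/ibert-roberta-base is not fine-tuned for question answering, so the decoded span is not expected to be a meaningful answer):

>>> predicted_answer = tokenizer.decode(predict_answer_tokens, skip_special_tokens=True)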
https://huggingface.co/docs/transformers/model_doc/hubert
Hubert Overview Hubert was proposed in HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed. The abstract from the paper is the following: Self-supervised approaches for speech representation learning are challenged by three unique problems: (1) there are multiple sound units in each input utterance, (2) there is no lexicon of input sound units during the pre-training phase, and (3) sound units have variable lengths with no explicit segmentation. To deal with these three problems, we propose the Hidden-Unit BERT (HuBERT) approach for self-supervised speech representation learning, which utilizes an offline clustering step to provide aligned target labels for a BERT-like prediction loss. A key ingredient of our approach is applying the prediction loss over the masked regions only, which forces the model to learn a combined acoustic and language model over the continuous inputs. HuBERT relies primarily on the consistency of the unsupervised clustering step rather than the intrinsic quality of the assigned cluster labels. Starting with a simple k-means teacher of 100 clusters, and using two iterations of clustering, the HuBERT model either matches or improves upon the state-of-the-art wav2vec 2.0 performance on the Librispeech (960h) and Libri-light (60,000h) benchmarks with 10min, 1h, 10h, 100h, and 960h fine-tuning subsets. Using a 1B parameter model, HuBERT shows up to 19% and 13% relative WER reduction on the more challenging dev-other and test-other evaluation subsets. Tips: Hubert is a speech model that accepts a float array corresponding to the raw waveform of the speech signal. Hubert model was fine-tuned using connectionist temporal classification (CTC) so the model output has to be decoded using Wav2Vec2CTCTokenizer. This model was contributed by patrickvonplaten. Documentation resources Audio classification task guide Automatic speech recognition task guide HubertConfig class transformers.HubertConfig < source > ( vocab_size = 32 hidden_size = 768 num_hidden_layers = 12 num_attention_heads = 12 intermediate_size = 3072 hidden_act = 'gelu' hidden_dropout = 0.1 activation_dropout = 0.1 attention_dropout = 0.1 feat_proj_layer_norm = True feat_proj_dropout = 0.0 final_dropout = 0.1 layerdrop = 0.1 initializer_range = 0.02 layer_norm_eps = 1e-05 feat_extract_norm = 'group' feat_extract_activation = 'gelu' conv_dim = (512, 512, 512, 512, 512, 512, 512) conv_stride = (5, 2, 2, 2, 2, 2, 2) conv_kernel = (10, 3, 3, 3, 3, 2, 2) conv_bias = False num_conv_pos_embeddings = 128 num_conv_pos_embedding_groups = 16 do_stable_layer_norm = False apply_spec_augment = True mask_time_prob = 0.05 mask_time_length = 10 mask_time_min_masks = 2 mask_feature_prob = 0.0 mask_feature_length = 10 mask_feature_min_masks = 0 ctc_loss_reduction = 'sum' ctc_zero_infinity = False use_weighted_layer_sum = False classifier_proj_size = 256 pad_token_id = 0 bos_token_id = 1 eos_token_id = 2 **kwargs ) Parameters vocab_size (int, optional, defaults to 32) — Vocabulary size of the Hubert model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling HubertModel. Vocabulary size of the model. Defines the different tokens that can be represented by the inputs_ids passed to the forward method of HubertModel. hidden_size (int, optional, defaults to 768) — Dimensionality of the encoder layers and the pooler layer. 
num_hidden_layers (int, optional, defaults to 12) — Number of hidden layers in the Transformer encoder.
num_attention_heads (int, optional, defaults to 12) — Number of attention heads for each attention layer in the Transformer encoder.
intermediate_size (int, optional, defaults to 3072) — Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.
hidden_act (str or function, optional, defaults to "gelu") — The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "selu" and "gelu_new" are supported.
hidden_dropout (float, optional, defaults to 0.1) — The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
activation_dropout (float, optional, defaults to 0.1) — The dropout ratio for activations inside the fully connected layer.
attention_dropout (float, optional, defaults to 0.1) — The dropout ratio for the attention probabilities.
final_dropout (float, optional, defaults to 0.1) — The dropout probability for the final projection layer of HubertForCTC.
layerdrop (float, optional, defaults to 0.1) — The LayerDrop probability. See the LayerDrop paper (https://arxiv.org/abs/1909.11556) for more details.
initializer_range (float, optional, defaults to 0.02) — The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
layer_norm_eps (float, optional, defaults to 1e-05) — The epsilon used by the layer normalization layers.
feat_extract_norm (str, optional, defaults to "group") — The norm to be applied to 1D convolutional layers in the feature encoder. One of "group" for group normalization of only the first 1D convolutional layer or "layer" for layer normalization of all 1D convolutional layers.
feat_proj_dropout (float, optional, defaults to 0.0) — The dropout probability for the output of the feature encoder.
feat_proj_layer_norm (bool, optional, defaults to True) — Whether to apply LayerNorm to the output of the feature encoder.
feat_extract_activation (str, optional, defaults to "gelu") — The non-linear activation function (function or string) in the 1D convolutional layers of the feature extractor. If string, "gelu", "relu", "selu" and "gelu_new" are supported.
conv_dim (Tuple[int], optional, defaults to (512, 512, 512, 512, 512, 512, 512)) — A tuple of integers defining the number of input and output channels of each 1D convolutional layer in the feature encoder. The length of conv_dim defines the number of 1D convolutional layers.
conv_stride (Tuple[int], optional, defaults to (5, 2, 2, 2, 2, 2, 2)) — A tuple of integers defining the stride of each 1D convolutional layer in the feature encoder. The length of conv_stride defines the number of convolutional layers and has to match the length of conv_dim.
conv_kernel (Tuple[int], optional, defaults to (10, 3, 3, 3, 3, 2, 2)) — A tuple of integers defining the kernel size of each 1D convolutional layer in the feature encoder. The length of conv_kernel defines the number of convolutional layers and has to match the length of conv_dim.
conv_bias (bool, optional, defaults to False) — Whether the 1D convolutional layers have a bias.
num_conv_pos_embeddings (int, optional, defaults to 128) — Number of convolutional positional embeddings. Defines the kernel size of the 1D convolutional positional embeddings layer.
num_conv_pos_embedding_groups (int, optional, defaults to 16) — Number of groups of the 1D convolutional positional embeddings layer.
do_stable_layer_norm (bool, optional, defaults to False) — Whether to apply the stable layer norm architecture of the Transformer encoder. do_stable_layer_norm is True corresponds to applying layer norm before the attention layer, whereas do_stable_layer_norm is False corresponds to applying layer norm after the attention layer.
apply_spec_augment (bool, optional, defaults to True) — Whether to apply SpecAugment data augmentation to the outputs of the feature encoder. For reference see SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition.
mask_time_prob (float, optional, defaults to 0.05) — Percentage (between 0 and 1) of all feature vectors along the time axis which will be masked. The masking procedure generates mask_time_prob * len(time_axis) / mask_time_length independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, mask_time_prob should be prob_vector_start * mask_time_length. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if apply_spec_augment is True.
mask_time_length (int, optional, defaults to 10) — Length of vector span along the time axis.
mask_time_min_masks (int, optional, defaults to 2) — The minimum number of masks of length mask_time_length generated along the time axis at each time step, irrespective of mask_time_prob. Only relevant if mask_time_prob * len(time_axis) / mask_time_length < mask_time_min_masks.
mask_feature_prob (float, optional, defaults to 0.0) — Percentage (between 0 and 1) of all feature vectors along the feature axis which will be masked. The masking procedure generates mask_feature_prob * len(feature_axis) / mask_feature_length independent masks over the axis. If reasoning from the probability of each feature vector to be chosen as the start of the vector span to be masked, mask_feature_prob should be prob_vector_start * mask_feature_length. Note that overlap may decrease the actual percentage of masked vectors. This is only relevant if apply_spec_augment is True.
mask_feature_length (int, optional, defaults to 10) — Length of vector span along the feature axis.
mask_feature_min_masks (int, optional, defaults to 0) — The minimum number of masks of length mask_feature_length generated along the feature axis at each time step, irrespective of mask_feature_prob. Only relevant if mask_feature_prob * len(feature_axis) / mask_feature_length < mask_feature_min_masks.
ctc_loss_reduction (str, optional, defaults to "sum") — Specifies the reduction to apply to the output of torch.nn.CTCLoss. Only relevant when training an instance of HubertForCTC.
ctc_zero_infinity (bool, optional, defaults to False) — Whether to zero infinite losses and the associated gradients of torch.nn.CTCLoss. Infinite losses mainly occur when the inputs are too short to be aligned to the targets. Only relevant when training an instance of HubertForCTC.
use_weighted_layer_sum (bool, optional, defaults to False) — Whether to use a weighted average of layer outputs with learned weights. Only relevant when using an instance of HubertForSequenceClassification.
classifier_proj_size (int, optional, defaults to 256) — Dimensionality of the projection before token mean-pooling for classification.
This is the configuration class to store the configuration of a HubertModel. It is used to instantiate a Hubert model according to the specified arguments, defining the model architecture.
Instantiating a configuration with the defaults will yield a similar configuration to that of the Hubert facebook/hubert-base-ls960 architecture. Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information. Example:
>>> from transformers import HubertModel, HubertConfig

>>> # Initializing a Hubert facebook/hubert-base-ls960 style configuration
>>> configuration = HubertConfig()

>>> # Initializing a model from the facebook/hubert-base-ls960 style configuration
>>> model = HubertModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
HubertModel class transformers.HubertModel < source > ( config: HubertConfig ) Parameters config (HubertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare Hubert Model transformer outputting raw hidden-states without any specific head on top. Hubert was proposed in HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving etc.). This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_values: typing.Optional[torch.Tensor] attention_mask: typing.Optional[torch.Tensor] = None mask_time_indices: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.BaseModelOutput or tuple(torch.FloatTensor) Parameters input_values (torch.FloatTensor of shape (batch_size, sequence_length)) — Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.call() for details. attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as hubert-base, attention_mask should not be passed to avoid degraded performance when doing batched inference. For such models input_values should simply be padded with 0 and passed without attention_mask. Be aware that these models also yield slightly different results depending on whether input_values is padded or not. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail.
return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. A transformers.modeling_outputs.BaseModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (HubertConfig) and inputs. last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The HubertModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoProcessor, HubertModel >>> from datasets import load_dataset >>> import soundfile as sf >>> processor = AutoProcessor.from_pretrained("facebook/hubert-large-ls960-ft") >>> model = HubertModel.from_pretrained("facebook/hubert-large-ls960-ft") >>> def map_to_array(batch): ... speech, _ = sf.read(batch["file"]) ... batch["speech"] = speech ... return batch >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> ds = ds.map(map_to_array) >>> input_values = processor(ds["speech"][0], return_tensors="pt").input_values >>> hidden_states = model(input_values).last_hidden_state HubertForCTC class transformers.HubertForCTC < source > ( config target_lang: typing.Optional[str] = None ) Parameters config (HubertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Hubert Model with a language modeling head on top for Connectionist Temporal Classification (CTC). Hubert was proposed in HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving etc.). This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. 
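As a quick illustration of the input_values / attention_mask padding guidance given in the documentation above, here is a minimal batched-inference sketch. It is not part of the official reference: it feeds dummy waveforms instead of real 16 kHz speech and assumes the facebook/hubert-large-ls960-ft checkpoint. If a checkpoint's processor does not return an attention mask, the padded input_values are simply passed on their own, as noted above.

>>> import numpy as np
>>> import torch
>>> from transformers import AutoProcessor, HubertForCTC

>>> processor = AutoProcessor.from_pretrained("facebook/hubert-large-ls960-ft")
>>> model = HubertForCTC.from_pretrained("facebook/hubert-large-ls960-ft")

>>> # two dummy waveforms of different lengths; the processor pads them to a common length
>>> speech = [np.random.randn(16000).astype(np.float32), np.random.randn(24000).astype(np.float32)]
>>> inputs = processor(speech, sampling_rate=16000, padding=True, return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> transcriptions = processor.batch_decode(torch.argmax(logits, dim=-1))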
forward < source > ( input_values: typing.Optional[torch.Tensor] attention_mask: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None labels: typing.Optional[torch.Tensor] = None ) → transformers.modeling_outputs.CausalLMOutput or tuple(torch.FloatTensor) Parameters input_values (torch.FloatTensor of shape (batch_size, sequence_length)) — Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). To prepare the array into input_values, the AutoProcessor should be used for padding and conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.call() for details. attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as hubert-base, attention_mask should not be passed to avoid degraded performance when doing batched inference. For such models input_values should simply be padded with 0 and passed without attention_mask. Be aware that these models also yield slightly different results depending on whether input_values is padded or not. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size, target_length), optional) — Labels for connectionist temporal classification. Note that target_length has to be smaller or equal to the sequence length of the output logits. Indices are selected in [-100, 0, ..., config.vocab_size - 1]. All labels set to -100 are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size - 1]. A transformers.modeling_outputs.CausalLMOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (HubertConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Language modeling loss (for next-token prediction). logits (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. 
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The HubertForCTC forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoProcessor, HubertForCTC >>> from datasets import load_dataset >>> import torch >>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation") >>> dataset = dataset.sort("id") >>> sampling_rate = dataset.features["audio"].sampling_rate >>> processor = AutoProcessor.from_pretrained("facebook/hubert-large-ls960-ft") >>> model = HubertForCTC.from_pretrained("facebook/hubert-large-ls960-ft") >>> >>> inputs = processor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt") >>> with torch.no_grad(): ... logits = model(**inputs).logits >>> predicted_ids = torch.argmax(logits, dim=-1) >>> >>> transcription = processor.batch_decode(predicted_ids) >>> transcription[0] 'MISTER QUILTER IS THE APOSTLE OF THE MIDDLE CLASSES AND WE ARE GLAD TO WELCOME HIS GOSPEL' >>> inputs["labels"] = processor(text=dataset[0]["text"], return_tensors="pt").input_ids >>> >>> loss = model(**inputs).loss >>> round(loss.item(), 2) 22.68 HubertForSequenceClassification class transformers.HubertForSequenceClassification < source > ( config ) Parameters config (HubertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. Hubert Model with a sequence classification head on top (a linear layer over the pooled output) for tasks like SUPERB Keyword Spotting. Hubert was proposed in HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units by Wei-Ning Hsu, Benjamin Bolte, Yao-Hung Hubert Tsai, Kushal Lakhotia, Ruslan Salakhutdinov, Abdelrahman Mohamed. This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving etc.). This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior. forward < source > ( input_values: typing.Optional[torch.Tensor] attention_mask: typing.Optional[torch.Tensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None labels: typing.Optional[torch.Tensor] = None ) → transformers.modeling_outputs.SequenceClassifierOutput or tuple(torch.FloatTensor) Parameters input_values (torch.FloatTensor of shape (batch_size, sequence_length)) — Float values of input raw speech waveform. Values can be obtained by loading a .flac or .wav audio file into an array of type List[float] or a numpy.ndarray, e.g. via the soundfile library (pip install soundfile). 
To prepare the array into input_values, the AutoProcessor should be used for padding and conversion into a tensor of type torch.FloatTensor. See Wav2Vec2Processor.call() for details. attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) — Mask to avoid performing convolution and attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? attention_mask should only be passed if the corresponding processor has config.return_attention_mask == True. For all models whose processor has config.return_attention_mask == False, such as hubert-base, attention_mask should not be passed to avoid degraded performance when doing batched inference. For such models input_values should simply be padded with 0 and passed without attention_mask. Be aware that these models also yield slightly different results depending on whether input_values is padded or not. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. labels (torch.LongTensor of shape (batch_size,), optional) — Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy). A transformers.modeling_outputs.SequenceClassifierOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (HubertConfig) and inputs. loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) — Classification (or regression if config.num_labels==1) loss. logits (torch.FloatTensor of shape (batch_size, config.num_labels)) — Classification (or regression if config.num_labels==1) scores (before SoftMax). hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of torch.FloatTensor (one for the output of the embeddings, if the model has an embedding layer, + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the optional initial embedding outputs. attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The HubertForSequenceClassification forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. 
Example:
>>> from transformers import AutoFeatureExtractor, HubertForSequenceClassification
>>> from datasets import load_dataset
>>> import torch

>>> dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
>>> dataset = dataset.sort("id")
>>> sampling_rate = dataset.features["audio"].sampling_rate

>>> feature_extractor = AutoFeatureExtractor.from_pretrained("superb/hubert-base-superb-ks")
>>> model = HubertForSequenceClassification.from_pretrained("superb/hubert-base-superb-ks")

>>> # audio file is decoded on the fly
>>> inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")

>>> with torch.no_grad():
...     logits = model(**inputs).logits

>>> predicted_class_ids = torch.argmax(logits, dim=-1).item()
>>> predicted_label = model.config.id2label[predicted_class_ids]
>>> predicted_label
'_unknown_'

>>> # compute loss - target_label is e.g. "down"
>>> target_label = model.config.id2label[0]
>>> inputs["labels"] = torch.tensor([model.config.label2id[target_label]])
>>> loss = model(**inputs).loss
>>> round(loss.item(), 2)
8.53
TFHubertModel class transformers.TFHubertModel < source > ( *args **kwargs ) Parameters config (HubertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. The bare TFHubert Model transformer outputting raw hidden-states without any specific head on top. This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_values only and nothing else: model(input_values) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_values, attention_mask]) or model([input_values, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_values": input_values, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function!
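For instance, the three call styles listed above are interchangeable. A minimal sketch, using a random placeholder tensor in place of a real 16 kHz waveform prepared by the processor:

>>> import tensorflow as tf
>>> from transformers import TFHubertModel

>>> model = TFHubertModel.from_pretrained("facebook/hubert-large-ls960-ft")
>>> input_values = tf.random.normal((1, 16000))  # placeholder raw waveform, batch size 1

>>> # keyword argument, a list in the documented order, or a dict keyed by input name
>>> out_kwargs = model(input_values=input_values)
>>> out_list = model([input_values])
>>> out_dict = model({"input_values": input_values})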
call < source > ( input_values: tf.Tensor attention_mask: tf.Tensor | None = None token_type_ids: tf.Tensor | None = None position_ids: tf.Tensor | None = None head_mask: tf.Tensor | None = None inputs_embeds: tf.Tensor | None = None output_attentions: Optional[bool] = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: bool = False ) → transformers.modeling_tf_outputs.TFBaseModelOutput or tuple(tf.Tensor) Parameters input_values (np.ndarray, tf.Tensor, List[tf.Tensor] Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape ({0})) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (np.ndarray or tf.Tensor of shape ({0}), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (np.ndarray or tf.Tensor of shape ({0}), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (np.ndarray or tf.Tensor of shape ({0}), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (np.ndarray or tf.Tensor of shape ({0}, hidden_size), optional) — Optionally, instead of passing input_values you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_values indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to `False“) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). A transformers.modeling_tf_outputs.TFBaseModelOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (HubertConfig) and inputs. last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)) — Sequence of hidden-states at the output of the last layer of the model. 
hidden_states (tuple(tf.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFHubertModel forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example: >>> from transformers import AutoProcessor, TFHubertModel >>> from datasets import load_dataset >>> import soundfile as sf >>> processor = AutoProcessor.from_pretrained("facebook/hubert-large-ls960-ft") >>> model = TFHubertModel.from_pretrained("facebook/hubert-large-ls960-ft") >>> def map_to_array(batch): ... speech, _ = sf.read(batch["file"]) ... batch["speech"] = speech ... return batch >>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") >>> ds = ds.map(map_to_array) >>> input_values = processor(ds["speech"][0], return_tensors="tf").input_values >>> hidden_states = model(input_values).last_hidden_state TFHubertForCTC class transformers.TFHubertForCTC < source > ( *args **kwargs ) Parameters config (HubertConfig) — Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights. TFHubert Model with a language modeling head on top for Connectionist Temporal Classification (CTC). This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads etc.) This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior. TensorFlow models and layers in transformers accept two formats as input: having all inputs as keyword arguments (like PyTorch models), or having all inputs as a list, tuple or dict in the first positional argument. The reason the second format is supported is that Keras methods prefer this format when passing inputs to models and layers. Because of this support, when using methods like model.fit() things should “just work” for you - just pass your inputs and labels in any format that model.fit() supports! 
If, however, you want to use the second format outside of Keras methods like fit() and predict(), such as when creating your own layers or models with the Keras Functional API, there are three possibilities you can use to gather all the input Tensors in the first positional argument: a single Tensor with input_values only and nothing else: model(input_values) a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_values, attention_mask]) or model([input_values, attention_mask, token_type_ids]) a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_values": input_values, "token_type_ids": token_type_ids}) Note that when creating models and layers with subclassing then you don’t need to worry about any of this, as you can just pass inputs like you would to any other Python function! call < source > ( input_values: tf.Tensor attention_mask: tf.Tensor | None = None token_type_ids: tf.Tensor | None = None position_ids: tf.Tensor | None = None head_mask: tf.Tensor | None = None inputs_embeds: tf.Tensor | None = None output_attentions: Optional[bool] = None labels: tf.Tensor | None = None output_hidden_states: Optional[bool] = None return_dict: Optional[bool] = None training: Optional[bool] = False ) → transformers.modeling_tf_outputs.TFCausalLMOutput or tuple(tf.Tensor) Parameters input_values (np.ndarray, tf.Tensor, List[tf.Tensor] Dict[str, tf.Tensor] or Dict[str, np.ndarray] and each example must have the shape ({0})) — Indices of input sequence tokens in the vocabulary. Indices can be obtained using AutoTokenizer. See PreTrainedTokenizer.call() and PreTrainedTokenizer.encode() for details. What are input IDs? attention_mask (np.ndarray or tf.Tensor of shape ({0}), optional) — Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are not masked, 0 for tokens that are masked. What are attention masks? token_type_ids (np.ndarray or tf.Tensor of shape ({0}), optional) — Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token. What are token type IDs? position_ids (np.ndarray or tf.Tensor of shape ({0}), optional) — Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1]. What are position IDs? head_mask (np.ndarray or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional) — Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked. inputs_embeds (np.ndarray or tf.Tensor of shape ({0}, hidden_size), optional) — Optionally, instead of passing input_values you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_values indices into associated vectors than the model’s internal embedding lookup matrix. output_attentions (bool, optional) — Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead. output_hidden_states (bool, optional) — Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more detail. 
This argument can be used only in eager mode, in graph mode the value in the config will be used instead. return_dict (bool, optional) — Whether or not to return a ModelOutput instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True. training (bool, optional, defaults to False) — Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation). labels (tf.Tensor or np.ndarray of shape (batch_size, sequence_length), optional) — Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see the input_values docstring). Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]. A transformers.modeling_tf_outputs.TFCausalLMOutput or a tuple of tf.Tensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (HubertConfig) and inputs. loss (tf.Tensor of shape (n,), optional, where n is the number of non-masked labels, returned when labels is provided) — Language modeling loss (for next-token prediction). logits (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)) — Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) — Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs. attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) — Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads. The TFHubertForCTC forward method, overrides the __call__ special method. Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them. Example:
>>> import tensorflow as tf
>>> from transformers import AutoProcessor, TFHubertForCTC
>>> from datasets import load_dataset
>>> import soundfile as sf

>>> processor = AutoProcessor.from_pretrained("facebook/hubert-large-ls960-ft")
>>> model = TFHubertForCTC.from_pretrained("facebook/hubert-large-ls960-ft")

>>> def map_to_array(batch):
...     speech, _ = sf.read(batch["file"])
...     batch["speech"] = speech
...     return batch

>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> ds = ds.map(map_to_array)

>>> input_values = processor(ds["speech"][0], return_tensors="tf").input_values
>>> logits = model(input_values).logits
>>> predicted_ids = tf.argmax(logits, axis=-1)

>>> transcription = processor.decode(predicted_ids[0])

>>> # compute loss
>>> target_transcription = "A MAN SAID TO THE UNIVERSE SIR I EXIST"

>>> # pass the target transcription as text to encode the labels
>>> labels = processor(text=target_transcription, return_tensors="tf").input_ids

>>> loss = model(input_values, labels=labels).loss