ELECTRA

Overview

The ELECTRA model was proposed in the paper ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. ELECTRA is a new pre-training approach which trains two transformer models: the generator and the discriminator. The generator's role is to replace tokens in a sequence, and it is therefore trained as a masked language model. The discriminator, which is the model we're interested in, tries to identify which tokens in the sequence were replaced by the generator.

The abstract from the paper is the following:

Masked language modeling (MLM) pre-training methods such as BERT corrupt the input by replacing some tokens with [MASK] and then train a model to reconstruct the original tokens. While they produce good results when transferred to downstream NLP tasks, they generally require large amounts of compute to be effective. As an alternative, we propose a more sample-efficient pre-training task called replaced token detection. Instead of masking the input, our approach corrupts it by replacing some tokens with plausible alternatives sampled from a small generator network. Then, instead of training a model that predicts the original identities of the corrupted tokens, we train a discriminative model that predicts whether each token in the corrupted input was replaced by a generator sample or not. Thorough experiments demonstrate this new pre-training task is more efficient than MLM because the task is defined over all input tokens rather than just the small subset that was masked out. As a result, the contextual representations learned by our approach substantially outperform the ones learned by BERT given the same model size, data, and compute. The gains are particularly strong for small models; for example, we train a model on one GPU for 4 days that outperforms GPT (trained using 30x more compute) on the GLUE natural language understanding benchmark. Our approach also works well at scale, where it performs comparably to RoBERTa and XLNet while using less than 1/4 of their compute and outperforms them when using the same amount of compute.

Tips:

  • ELECTRA is the pre-training approach, therefore nearly no changes are made to the underlying model: BERT. The only change is the separation of the embedding size and the hidden size: the embedding size is generally smaller, while the hidden size is larger. An additional projection layer (linear) is used to project the embeddings from their embedding size to the hidden size. No projection layer is used when the embedding size is the same as the hidden size.

  • The ELECTRA checkpoints saved using Google Research’s implementation contain both the generator and discriminator. The conversion script requires the user to name which model to export into the correct architecture. Once converted to the HuggingFace format, however, these checkpoints may be loaded into any of the available ELECTRA models. This means that the discriminator may be loaded in the ElectraForMaskedLM model, and the generator may be loaded in the ElectraForPreTraining model (the classification head will be randomly initialized as it doesn’t exist in the generator). See the loading sketch after this list.
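
For example, the two checkpoints referenced in the examples on this page can be loaded into their matching head classes. This is only a minimal loading sketch using the google/electra-small-* checkpoints that appear in the examples below:

>>> from transformers import ElectraForMaskedLM, ElectraForPreTraining

>>> # The discriminator checkpoint matches the replaced-token-detection head
>>> discriminator = ElectraForPreTraining.from_pretrained('google/electra-small-discriminator')

>>> # The generator checkpoint matches the masked language modeling head
>>> generator = ElectraForMaskedLM.from_pretrained('google/electra-small-generator')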

The original code can be found here.

ElectraConfig

class transformers.ElectraConfig(vocab_size=30522, embedding_size=128, hidden_size=256, num_hidden_layers=12, num_attention_heads=4, intermediate_size=1024, hidden_act='gelu', hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, max_position_embeddings=512, type_vocab_size=2, initializer_range=0.02, layer_norm_eps=1e-12, summary_type='first', summary_use_proj=True, summary_activation='gelu', summary_last_dropout=0.1, pad_token_id=0, **kwargs)[source]

This is the configuration class to store the configuration of an ElectraModel. It is used to instantiate an ELECTRA model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the ELECTRA google/electra-small-discriminator architecture.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

Parameters
  • vocab_size (int, optional, defaults to 30522) – Vocabulary size of the ELECTRA model. Defines the different tokens that can be represented by the input_ids passed to the forward method of ElectraModel.

  • embedding_size (int, optional, defaults to 128) – Dimensionality of the token embeddings. When it differs from hidden_size, a linear projection layer maps the embeddings to the hidden size (see the tips above).

  • hidden_size (int, optional, defaults to 256) – Dimensionality of the encoder layers and the pooler layer.

  • num_hidden_layers (int, optional, defaults to 12) – Number of hidden layers in the Transformer encoder.

  • num_attention_heads (int, optional, defaults to 4) – Number of attention heads for each attention layer in the Transformer encoder.

  • intermediate_size (int, optional, defaults to 1024) – Dimensionality of the “intermediate” (i.e., feed-forward) layer in the Transformer encoder.

  • hidden_act (str or function, optional, defaults to “gelu”) – The non-linear activation function (function or string) in the encoder and pooler. If string, “gelu”, “relu”, “swish” and “gelu_new” are supported.

  • hidden_dropout_prob (float, optional, defaults to 0.1) – The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.

  • attention_probs_dropout_prob (float, optional, defaults to 0.1) – The dropout ratio for the attention probabilities.

  • max_position_embeddings (int, optional, defaults to 512) – The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).

  • type_vocab_size (int, optional, defaults to 2) – The vocabulary size of the token_type_ids passed into ElectraModel.

  • initializer_range (float, optional, defaults to 0.02) – The standard deviation of the truncated_normal_initializer for initializing all weight matrices.

  • layer_norm_eps (float, optional, defaults to 1e-12) – The epsilon used by the layer normalization layers.

  • summary_type (string, optional, defaults to “first”) –

    Argument used when doing sequence summary. Used for the multiple choice head in ElectraForMultipleChoice. Is one of the following options:

    • 'last' => take the last token hidden state (like XLNet)

    • 'first' => take the first token hidden state (like BERT)

    • 'mean' => take the mean of all tokens' hidden states

    • 'cls_index' => supply a Tensor of classification token position (GPT/GPT-2)

    • 'attn' => not implemented now, use multi-head attention

  • summary_use_proj (boolean, optional, defaults to True) – Argument used when doing sequence summary. Used for the multiple choice head in ElectraForMultipleChoice. Whether to add a projection after the vector extraction.

  • summary_activation (string or None, optional, defaults to 'gelu') – Argument used when doing sequence summary. Used for the multiple choice head in ElectraForMultipleChoice. 'gelu' => add a gelu activation to the output, Other => no activation.

  • summary_last_dropout (float, optional, defaults to 0.1) – Argument used when doing sequence summary. Used for the multiple choice head in ElectraForMultipleChoice. Add a dropout after the projection and activation.

Example:

>>> from transformers import ElectraModel, ElectraConfig

>>> # Initializing an ELECTRA google/electra-small-discriminator style configuration
>>> configuration = ElectraConfig()

>>> # Initializing a model from the google/electra-small-discriminator style configuration
>>> model = ElectraModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config

ElectraTokenizer

class transformers.ElectraTokenizer(vocab_file, do_lower_case=True, do_basic_tokenize=True, never_split=None, unk_token='[UNK]', sep_token='[SEP]', pad_token='[PAD]', cls_token='[CLS]', mask_token='[MASK]', tokenize_chinese_chars=True, **kwargs)[source]

Constructs an Electra tokenizer. ElectraTokenizer is identical to BertTokenizer and runs end-to-end tokenization: punctuation splitting + wordpiece.

Refer to superclass BertTokenizer for usage examples and documentation concerning parameters.
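
A short usage sketch, assuming the google/electra-small-discriminator vocabulary used in the model examples below:

>>> from transformers import ElectraTokenizer

>>> tokenizer = ElectraTokenizer.from_pretrained('google/electra-small-discriminator')
>>> # Returns a dict with input_ids, token_type_ids and attention_mask, as with BertTokenizer
>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")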

ElectraTokenizerFast

class transformers.ElectraTokenizerFast(vocab_file, do_lower_case=True, unk_token='[UNK]', sep_token='[SEP]', pad_token='[PAD]', cls_token='[CLS]', mask_token='[MASK]', clean_text=True, tokenize_chinese_chars=True, strip_accents=None, wordpieces_prefix='##', **kwargs)[source]

Constructs a “Fast” ELECTRA tokenizer (backed by HuggingFace’s tokenizers library).

ElectraTokenizerFast is identical to BertTokenizerFast and runs end-to-end tokenization: punctuation splitting + wordpiece.

Refer to superclass BertTokenizerFast for usage examples and documentation concerning parameters.
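
As a sketch of what the fast tokenizer adds on top of the Python one (assuming the same google/electra-small-discriminator vocabulary), character offsets can be requested for each token:

>>> from transformers import ElectraTokenizerFast

>>> tokenizer = ElectraTokenizerFast.from_pretrained('google/electra-small-discriminator')
>>> # return_offsets_mapping is only available with the "Fast" tokenizers
>>> encoding = tokenizer("Hello, my dog is cute", return_offsets_mapping=True)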

ElectraModel

class transformers.ElectraModel(config)[source]

The bare Electra Model transformer outputting raw hidden-states without any specific head on top. Identical to the BERT model except that it uses an additional linear layer between the embedding layer and the encoder if the hidden size and embedding size are different. Both the generator and discriminator checkpoints may be loaded into this model. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.

Parameters

config (ElectraConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

config_class

alias of transformers.configuration_electra.ElectraConfig

forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None)[source]

The ElectraModel forward method, overrides the __call__() special method.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Parameters
  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using transformers.ElectraTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

    What are attention masks?

  • token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token

    What are token type IDs?

  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

    What are position IDs?

  • head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional, defaults to None) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional, defaults to None) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional, defaults to None) – Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder.

  • encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional, defaults to None) – Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

  • output_attentions (bool, optional, defaults to None) – If set to True, the attentions tensors of all attention layers are returned. See attentions under returned tensors for more detail.

Returns

last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)):

Sequence of hidden-states at the output of the last layer of the model.

hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True):

Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True):

Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

tuple(torch.FloatTensor) comprising various elements depending on the configuration (ElectraConfig) and inputs

Example:

>>> from transformers import ElectraTokenizer, ElectraModel
>>> import torch

>>> tokenizer = ElectraTokenizer.from_pretrained('google/electra-small-discriminator')
>>> model = ElectraModel.from_pretrained('google/electra-small-discriminator')

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)

>>> last_hidden_states = outputs[0]  # The last hidden-state is the first element of the output tuple
get_input_embeddings()[source]

Returns the model’s input embeddings.

Returns

A torch module mapping vocabulary to hidden states.

Return type

nn.Module

set_input_embeddings(value)[source]

Set model’s input embeddings

Parameters

value (nn.Module) – A module mapping vocabulary to hidden states.
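
Continuing from the ElectraModel example above, a minimal sketch of these two accessors; note that, per the tips above, the embedding matrix has embedding_size columns rather than hidden_size:

>>> import torch.nn as nn

>>> embeddings = model.get_input_embeddings()  # nn.Embedding of shape (vocab_size, embedding_size)
>>> new_embeddings = nn.Embedding(embeddings.num_embeddings, embeddings.embedding_dim)
>>> model.set_input_embeddings(new_embeddings)  # swap in a freshly initialized embedding matrix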

ElectraForPreTraining

class transformers.ElectraForPreTraining(config)[source]

Electra model with a binary classification head on top as used during pre-training for identifying generated tokens.

It is recommended to load the discriminator checkpoint into that model. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.

Parameters

config (ElectraConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, output_attentions=None, output_hidden_states=None)[source]

The ElectraForPreTraining forward method, overrides the __call__() special method.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Parameters
  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using transformers.ElectraTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

    What are attention masks?

  • token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token

    What are token type IDs?

  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

    What are position IDs?

  • head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional, defaults to None) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional, defaults to None) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional, defaults to None) – Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder.

  • encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional, defaults to None) – Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

  • output_attentions (bool, optional, defaults to None) – If set to True, the attentions tensors of all attention layers are returned. See attentions under returned tensors for more detail.

  • labels (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) – Labels for computing the ELECTRA loss. Input should be a sequence of tokens (see input_ids docstring) Indices should be in [0, 1]. 0 indicates the token is an original token, 1 indicates the token was replaced.

Returns

loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided):

Total loss of the ELECTRA objective.

scores (torch.FloatTensor of shape (batch_size, sequence_length))

Prediction scores of the head (scores for each token before SoftMax).

hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True):

Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True):

Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

tuple(torch.FloatTensor) comprising various elements depending on the configuration (ElectraConfig) and inputs

Examples:

>>> from transformers import ElectraTokenizer, ElectraForPreTraining
>>> import torch

>>> tokenizer = ElectraTokenizer.from_pretrained('google/electra-small-discriminator')
>>> model = ElectraForPreTraining.from_pretrained('google/electra-small-discriminator')

>>> input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0)  # Batch size 1
>>> scores = model(input_ids)[0]
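
The returned scores contain one logit per input token. As a sketch, they can be turned into hard replaced/original predictions by thresholding the logits at zero (i.e., a sigmoid probability of 0.5):

>>> # 1 = predicted "replaced by the generator", 0 = predicted "original"
>>> predictions = (scores > 0).long()  # shape (1, sequence_length)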

ElectraForMaskedLM

class transformers.ElectraForMaskedLM(config)[source]

Electra model with a language modeling head on top.

Even though both the discriminator and generator may be loaded into this model, the generator is the only model of the two to have been trained for the masked language modeling task. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.

Parameters

config (ElectraConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, output_attentions=None, output_hidden_states=None, **kwargs)[source]

The ElectraForMaskedLM forward method, overrides the __call__() special method.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Parameters
  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using transformers.ElectraTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

    What are attention masks?

  • token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token

    What are token type IDs?

  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

    What are position IDs?

  • head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional, defaults to None) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional, defaults to None) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional, defaults to None) – Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder.

  • encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional, defaults to None) – Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

  • output_attentions (bool, optional, defaults to None) – If set to True, the attentions tensors of all attention layers are returned. See attentions under returned tensors for more detail.

  • labels (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) – Labels for computing the masked language modeling loss. Indices should be in [-100, 0, ..., config.vocab_size] (see input_ids docstring) Tokens with indices set to -100 are ignored (masked), the loss is only computed for the tokens with labels in [0, ..., config.vocab_size]

  • kwargs (Dict[str, any], optional, defaults to {}) – Used to hide legacy arguments that have been deprecated.

Returns

masked_lm_loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided):

Masked language modeling loss.

prediction_scores (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size))

Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True):

Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True):

Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

tuple(torch.FloatTensor) comprising various elements depending on the configuration (ElectraConfig) and inputs

Example:

>>> from transformers import ElectraTokenizer, ElectraForMaskedLM
>>> import torch

>>> tokenizer = ElectraTokenizer.from_pretrained('google/electra-small-generator')
>>> model = ElectraForMaskedLM.from_pretrained('google/electra-small-generator')

>>> input_ids = tokenizer("Hello, my dog is cute", return_tensors="pt")["input_ids"]

>>> outputs = model(input_ids, labels=input_ids)
>>> loss, prediction_scores = outputs[:2]
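
Beyond computing a loss, the generator checkpoint can be used to guess a masked token. A minimal sketch, reusing the tokenizer and model loaded above (the [MASK] handling shown here is illustrative, not a dedicated API):

>>> masked = tokenizer("Hello, my dog is [MASK]", return_tensors="pt")
>>> prediction_scores = model(masked["input_ids"])[0]
>>> mask_position = (masked["input_ids"][0] == tokenizer.mask_token_id).nonzero()[0].item()
>>> predicted_token = tokenizer.convert_ids_to_tokens(prediction_scores[0, mask_position].argmax(-1).item())
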
get_output_embeddings()[source]

Returns the model’s output embeddings.

Returns

A torch module mapping hidden states to vocabulary.

Return type

nn.Module

ElectraForSequenceClassification

class transformers.ElectraForSequenceClassification(config)[source]

ELECTRA Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output) e.g. for GLUE tasks. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.

Parameters

config (ElectraConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, output_attentions=None, output_hidden_states=None)[source]

The ElectraForSequenceClassification forward method, overrides the __call__() special method.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Parameters
  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using transformers.ElectraTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

    What are attention masks?

  • token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token

    What are token type IDs?

  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

    What are position IDs?

  • head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional, defaults to None) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional, defaults to None) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional, defaults to None) – Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder.

  • encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional, defaults to None) – Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

  • output_attentions (bool, optional, defaults to None) – If set to True, the attentions tensors of all attention layers are returned. See attentions under returned tensors for more detail.

  • labels (torch.LongTensor of shape (batch_size,), optional, defaults to None) – Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1 a regression loss is computed (Mean-Square loss), If config.num_labels > 1 a classification loss is computed (Cross-Entropy).

Returns

loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided):

Classification (or regression if config.num_labels==1) loss.

logits (torch.FloatTensor of shape (batch_size, config.num_labels)):

Classification (or regression if config.num_labels==1) scores (before SoftMax).

hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True):

Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True):

Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

tuple(torch.FloatTensor) comprising various elements depending on the configuration (ElectraConfig) and inputs

Example:

>>> from transformers import ElectraTokenizer, ElectraForSequenceClassification
>>> import torch

>>> tokenizer = ElectraTokenizer.from_pretrained('google/electra-small-discriminator')
>>> model = ElectraForSequenceClassification.from_pretrained('google/electra-small-discriminator')

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> labels = torch.tensor([1]).unsqueeze(0)  # Batch size 1
>>> outputs = model(**inputs, labels=labels)
>>> loss, logits = outputs[:2]
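
At inference time the logits can be reduced to a predicted label index. A minimal sketch; note that the pre-trained discriminator checkpoint does not ship with a fine-tuned classification head, so the prediction is only meaningful after fine-tuning:

>>> predicted_class = logits.argmax(dim=-1).item()  # index of the highest-scoring label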

ElectraForTokenClassification

class transformers.ElectraForTokenClassification(config)[source]

Electra model with a token classification head on top.

Both the discriminator and generator may be loaded into this model. This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.

Parameters

config (ElectraConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, output_attentions=None, output_hidden_states=None)[source]

The ElectraForTokenClassification forward method, overrides the __call__() special method.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Parameters
  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using transformers.ElectraTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

    What are attention masks?

  • token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token

    What are token type IDs?

  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

    What are position IDs?

  • head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional, defaults to None) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional, defaults to None) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional, defaults to None) – Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder.

  • encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional, defaults to None) – Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

  • output_attentions (bool, optional, defaults to None) – If set to True, the attentions tensors of all attention layers are returned. See attentions under returned tensors for more detail.

  • labels (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) – Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].

Returns

loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided):

Classification loss.

scores (torch.FloatTensor of shape (batch_size, sequence_length, config.num_labels))

Classification scores (before SoftMax).

hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True):

Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True):

Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

tuple(torch.FloatTensor) comprising various elements depending on the configuration (ElectraConfig) and inputs

Example:

>>> from transformers import ElectraTokenizer, ElectraForTokenClassification
>>> import torch

>>> tokenizer = ElectraTokenizer.from_pretrained('google/electra-small-discriminator')
>>> model = ElectraForTokenClassification.from_pretrained('google/electra-small-discriminator')

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> labels = torch.tensor([1] * inputs["input_ids"].size(1)).unsqueeze(0)  # Batch size 1

>>> outputs = model(**inputs, labels=labels)
>>> loss, scores = outputs[:2]
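
Similarly, per-token label predictions can be obtained from the scores with an argmax. A minimal sketch (again, only meaningful once the randomly initialized token classification head has been fine-tuned):

>>> predictions = scores.argmax(dim=-1)  # shape (1, sequence_length): one predicted label id per token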

ElectraForQuestionAnswering

class transformers.ElectraForQuestionAnswering(config)[source]

ELECTRA Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layers on top of the hidden-states output to compute span start logits and span end logits). This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and behavior.

Parameters

config (ElectraConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

config_class

alias of transformers.configuration_electra.ElectraConfig

forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, start_positions=None, end_positions=None, output_attentions=None, output_hidden_states=None)[source]

The ElectraForQuestionAnswering forward method, overrides the __call__() special method.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Parameters
  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using transformers.ElectraTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

    What are attention masks?

  • token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token

    What are token type IDs?

  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

    What are position IDs?

  • head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional, defaults to None) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional, defaults to None) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional, defaults to None) – Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if the model is configured as a decoder.

  • encoder_attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional, defaults to None) – Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in the cross-attention if the model is configured as a decoder. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

  • output_attentions (bool, optional, defaults to None) – If set to True, the attentions tensors of all attention layers are returned. See attentions under returned tensors for more detail.

  • start_positions (torch.LongTensor of shape (batch_size,), optional, defaults to None) – Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss.

  • end_positions (torch.LongTensor of shape (batch_size,), optional, defaults to None) – Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss.

Returns

loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided):

Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.

start_scores (torch.FloatTensor of shape (batch_size, sequence_length,)):

Span-start scores (before SoftMax).

end_scores (torch.FloatTensor of shape (batch_size, sequence_length,)):

Span-end scores (before SoftMax).

hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True):

Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True):

Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

tuple(torch.FloatTensor) comprising various elements depending on the configuration (ElectraConfig) and inputs

Example:

>>> from transformers import ElectraTokenizer, ElectraForQuestionAnswering
>>> import torch

>>> tokenizer = ElectraTokenizer.from_pretrained('google/electra-small-discriminator')
>>> model = ElectraForQuestionAnswering.from_pretrained('google/electra-small-discriminator')

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> start_positions = torch.tensor([1])
>>> end_positions = torch.tensor([3])

>>> outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
>>> loss, start_scores, end_scores = outputs[:3]
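
As a sketch, the most likely answer span can be decoded from the start and end scores (only meaningful after the span classification head has been fine-tuned on a question answering dataset):

>>> answer_start = start_scores.argmax(dim=-1).item()
>>> answer_end = end_scores.argmax(dim=-1).item()
>>> answer = tokenizer.decode(inputs["input_ids"][0][answer_start:answer_end + 1].tolist())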

TFElectraModel

class transformers.TFElectraModel(*args, **kwargs)[source]

The bare Electra Model transformer outputting raw hidden-states without any specific head on top. Identical to the BERT model except that it uses an additional linear layer between the embedding layer and the encoder if the hidden size and embedding size are different. Both the generator and discriminator checkpoints may be loaded into this model. This model is a tf.keras.Model sub-class. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior.

Note

TF 2.0 models accept two formats as inputs:

  • having all inputs as keyword arguments (like PyTorch models), or

  • having all inputs as a list, tuple or dict in the first positional arguments.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).

If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

  • a single Tensor with input_ids only and nothing else: model(input_ids)

  • a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])

  • a dictionary with one or several input Tensors associated to the input names given in the docstring: model({'input_ids': input_ids, 'token_type_ids': token_type_ids})
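
As a sketch of these three formats, assuming the google/electra-small-discriminator checkpoint used in the example below:

>>> from transformers import ElectraTokenizer, TFElectraModel

>>> tokenizer = ElectraTokenizer.from_pretrained('google/electra-small-discriminator')
>>> model = TFElectraModel.from_pretrained('google/electra-small-discriminator')
>>> encoding = tokenizer("Hello, my dog is cute", return_tensors="tf")

>>> outputs = model(encoding["input_ids"])                                # a single tensor: input_ids only
>>> outputs = model([encoding["input_ids"], encoding["attention_mask"]])  # a list, in docstring order
>>> outputs = model({'input_ids': encoding["input_ids"], 'token_type_ids': encoding["token_type_ids"]})  # a dict keyed by input name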

Parameters

config (ElectraConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

call(inputs, **kwargs)[source]

The TFElectraModel forward method, overrides the __call__() special method.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Parameters
  • input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using transformers.ElectraTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

    What are attention masks?

  • position_ids (Numpy array or tf.Tensor of shape {0}, optional, defaults to None) –

    Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

    What are position IDs?

  • head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional, defaults to None) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

  • inputs_embeds (Numpy array or tf.Tensor of shape (batch_size, sequence_length, embedding_dim), optional, defaults to None) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • training (boolean, optional, defaults to False) – Whether to activate dropout modules (if set to True) during training or to de-activate them (if set to False) for evaluation.

  • output_attentions (bool, optional, defaults to None) – If set to True, the attentions tensors of all attention layers are returned. See attentions under returned tensors for more detail.

Returns

last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)):

Sequence of hidden-states at the output of the last layer of the model.

hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True):

tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True):

tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length):

Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

tuple(tf.Tensor) comprising various elements depending on the configuration (ElectraConfig) and inputs

Example:

>>> from transformers import ElectraTokenizer, TFElectraModel
>>> import tensorflow as tf

>>> tokenizer = ElectraTokenizer.from_pretrained('google/electra-small-discriminator')
>>> model = TFElectraModel.from_pretrained('google/electra-small-discriminator')

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> outputs = model(inputs)

>>> last_hidden_states = outputs[0]  # The last hidden-state is the first element of the output tuple

TFElectraForPreTraining

class transformers.TFElectraForPreTraining(*args, **kwargs)[source]

Electra model with a binary classification head on top as used during pre-training for identifying generated tokens.

Even though both the discriminator and generator may be loaded into this model, the discriminator is the only one of the two with the classification head appropriate for this model. This model is a tf.keras.Model sub-class. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matter related to general usage and behavior.

Note

TF 2.0 models accept two formats as inputs:

  • having all inputs as keyword arguments (like PyTorch models), or

  • having all inputs as a list, tuple or dict in the first positional arguments.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).

If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

  • a single Tensor with input_ids only and nothing else: model(input_ids)

  • a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])

  • a dictionary with one or several input Tensors associated to the input names given in the docstring: model({'input_ids': input_ids, 'token_type_ids': token_type_ids})

Parameters

config (ElectraConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

call(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, training=False)[source]

The TFElectraForPreTraining forward method, overrides the __call__() special method.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Parameters
  • input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using transformers.ElectraTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

    What are attention masks?

  • position_ids (Numpy array or tf.Tensor of shape {0}, optional, defaults to None) –

    Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

    What are position IDs?

  • head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional, defaults to None) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

  • inputs_embeds (Numpy array or tf.Tensor of shape (batch_size, sequence_length, embedding_dim), optional, defaults to None) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • training (boolean, optional, defaults to False) – Whether to activate dropout modules (if set to True) during training or to de-activate them (if set to False) for evaluation.

  • output_attentions (bool, optional, defaults to None) – If set to True, the attentions tensors of all attention layers are returned. See attentions under returned tensors for more detail.

Returns

scores (Numpy array or tf.Tensor of shape (batch_size, sequence_length)):

Prediction scores of the head (scores for each token before SoftMax).

hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True):

tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True):

tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length):

Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

tuple(tf.Tensor) comprising various elements depending on the configuration (ElectraConfig) and inputs

Examples:

import tensorflow as tf
from transformers import ElectraTokenizer, TFElectraForPreTraining

tokenizer = ElectraTokenizer.from_pretrained('google/electra-small-discriminator')
model = TFElectraForPreTraining.from_pretrained('google/electra-small-discriminator')
input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :]  # Batch size 1
outputs = model(input_ids)
scores = outputs[0]
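
The returned scores are the raw discriminator logits. As a minimal follow-up sketch (not part of the original example, and assuming the discriminator head emits one logit per input token), they can be squashed with a sigmoid and rounded to obtain binary replaced/original predictions:

predictions = tf.round(tf.nn.sigmoid(scores))  # 1 ~ predicted as replaced, 0 ~ predicted as original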

TFElectraForMaskedLM

class transformers.TFElectraForMaskedLM(*args, **kwargs)[source]

Electra model with a language modeling head on top.

Even though both the discriminator and generator may be loaded into this model, the generator is the only model of the two to have been trained for the masked language modeling task. This model is a tf.keras.Model sub-class. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

Note

TF 2.0 models accept two formats as inputs:

  • having all inputs as keyword arguments (like PyTorch models), or

  • having all inputs as a list, tuple or dict in the first positional argument.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).

If you choose this second option, there are three possibilities you can use to gather all the input tensors in the first positional argument (a short sketch follows this list):

  • a single Tensor with input_ids only and nothing else: model(input_ids)

  • a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])

  • a dictionary with one or several input Tensors associated to the input names given in the docstring: model({'input_ids': input_ids, 'token_type_ids': token_type_ids})
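
As a minimal sketch (not from the original documentation) of these three options, assuming a tokenizer output containing input_ids and attention_mask:

from transformers import ElectraTokenizer, TFElectraForMaskedLM

tokenizer = ElectraTokenizer.from_pretrained('google/electra-small-generator')
model = TFElectraForMaskedLM.from_pretrained('google/electra-small-generator')
encoding = tokenizer("Hello, my dog is cute", return_tensors="tf")

# 1. all inputs as keyword arguments (like PyTorch models)
outputs = model(input_ids=encoding["input_ids"], attention_mask=encoding["attention_mask"])

# 2. a list with the tensors in the order given in the docstring
outputs = model([encoding["input_ids"], encoding["attention_mask"]])

# 3. a dictionary keyed by the input names given in the docstring
outputs = model({"input_ids": encoding["input_ids"], "attention_mask": encoding["attention_mask"]})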

Parameters

config (ElectraConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

call(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, training=False)[source]

The TFElectraForMaskedLM forward method, overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the model instance afterwards instead of this method, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
  • input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using transformers.ElectraTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

    What are attention masks?

  • position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

    What are position IDs?

  • head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional, defaults to None) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

  • inputs_embeds (Numpy array or tf.Tensor of shape (batch_size, sequence_length, embedding_dim), optional, defaults to None) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • training (boolean, optional, defaults to False) – Whether to activate dropout modules (if set to True) during training or to de-activate them (if set to False) for evaluation.

  • output_attentions (bool, optional, defaults to None) – If set to True, the attentions tensors of all attention layers are returned. See attentions under returned tensors for more detail.

Returns

prediction_scores (Numpy array or tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)):

Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True):

tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True):

tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length):

Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

tuple(tf.Tensor) comprising various elements depending on the configuration (ElectraConfig) and inputs

Example:

>>> from transformers import ElectraTokenizer, TFElectraForMaskedLM
>>> import tensorflow as tf
>>> tokenizer = ElectraTokenizer.from_pretrained('google/electra-small-generator')
>>> model = TFElectraForMaskedLM.from_pretrained('google/electra-small-generator')
>>> input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True))[None, :]  # Batch size 1
>>> outputs = model(input_ids)
>>> prediction_scores = outputs[0]
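
The example above feeds an unmasked sentence. As a minimal follow-up sketch (not part of the original documentation, assuming the tokenizer's standard [MASK] token), the prediction scores can be used to fill a masked position:

>>> inputs = tokenizer("Hello, my dog is [MASK]", return_tensors="tf")
>>> logits = model(inputs)[0]
>>> mask_index = int(tf.where(inputs["input_ids"][0] == tokenizer.mask_token_id)[0, 0])
>>> predicted_id = int(tf.math.argmax(logits[0, mask_index]))
>>> tokenizer.decode([predicted_id])  # most likely replacement for the masked token
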
get_output_embeddings()[source]

Returns the model’s output embeddings.

Returns

A Keras layer mapping hidden states to the vocabulary.

Return type

tf.keras.layers.Layer

TFElectraForTokenClassification

class transformers.TFElectraForTokenClassification(*args, **kwargs)[source]

Electra model with a token classification head on top.

Both the discriminator and generator may be loaded into this model. This model is a tf.keras.Model sub-class. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

Note

TF 2.0 models accept two formats as inputs:

  • having all inputs as keyword arguments (like PyTorch models), or

  • having all inputs as a list, tuple or dict in the first positional argument.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).

If you choose this second option, there are three possibilities you can use to gather all the input tensors in the first positional argument:

  • a single Tensor with input_ids only and nothing else: model(input_ids)

  • a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])

  • a dictionary with one or several input Tensors associated to the input names given in the docstring: model({'input_ids': input_ids, 'token_type_ids': token_type_ids})

Parameters

config (ElectraConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

call(inputs=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, labels=None, training=False)[source]

The TFElectraForTokenClassification forward method, overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the model instance afterwards instead of this method, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
  • input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using transformers.ElectraTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

    What are attention masks?

  • position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

    What are position IDs?

  • head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional, defaults to None) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

  • inputs_embeds (Numpy array or tf.Tensor of shape (batch_size, sequence_length, embedding_dim), optional, defaults to None) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • training (boolean, optional, defaults to False) – Whether to activate dropout modules (if set to True) during training or to de-activate them (if set to False) for evaluation.

  • output_attentions (bool, optional, defaults to None) – If set to True, the attentions tensors of all attention layers are returned. See attentions under returned tensors for more detail.

  • labels (tf.Tensor of shape (batch_size, sequence_length), optional, defaults to None) – Labels for computing the token classification loss. Indices should be in [0, ..., config.num_labels - 1].

Returns

scores (Numpy array or tf.Tensor of shape (batch_size, sequence_length, config.num_labels)):

Classification scores (before SoftMax).

hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True):

tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True):

tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length):

Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

tuple(tf.Tensor) comprising various elements depending on the configuration (ElectraConfig) and inputs

Example:

>>> from transformers import ElectraTokenizer, TFElectraForTokenClassification
>>> import tensorflow as tf

>>> tokenizer = ElectraTokenizer.from_pretrained('google/electra-small-discriminator')
>>> model = TFElectraForTokenClassification.from_pretrained('google/electra-small-discriminator')

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> input_ids = inputs["input_ids"]
>>> inputs["labels"] = tf.reshape(tf.constant([1] * tf.size(input_ids).numpy()), (-1, tf.size(input_ids))) # Batch size 1

>>> outputs = model(inputs)
>>> loss, scores = outputs[:2]
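
As a minimal follow-up sketch (not part of the original example), the per-token classification scores can be converted into predicted label indices with an argmax over the label dimension:

>>> predicted_labels = tf.math.argmax(scores, axis=-1)  # shape (batch_size, sequence_length)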

TFElectraForQuestionAnswering

class transformers.TFElectraForQuestionAnswering(*args, **kwargs)[source]

Electra Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear layer on top of the hidden-states output to compute span start logits and span end logits). This model is a tf.keras.Model sub-class. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

Note

TF 2.0 models accept two formats as inputs:

  • having all inputs as keyword arguments (like PyTorch models), or

  • having all inputs as a list, tuple or dict in the first positional argument.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).

If you choose this second option, there are three possibilities you can use to gather all the input tensors in the first positional argument:

  • a single Tensor with input_ids only and nothing else: model(input_ids)

  • a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])

  • a dictionary with one or several input Tensors associated to the input names given in the docstring: model({'input_ids': input_ids, 'token_type_ids': token_type_ids})

Parameters

config (ElectraConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

call(inputs=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, start_positions=None, end_positions=None, training=False)[source]

The TFElectraForQuestionAnswering forward method, overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the model instance afterwards instead of this method, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
  • input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using transformers.ElectraTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.

    What are input IDs?

  • attention_mask (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

    What are attention masks?

  • position_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

    What are position IDs?

  • head_mask (Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads), optional, defaults to None) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

  • inputs_embeds (Numpy array or tf.Tensor of shape (batch_size, sequence_length, embedding_dim), optional, defaults to None) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • training (boolean, optional, defaults to False) – Whether to activate dropout modules (if set to True) during training or to de-activate them (if set to False) for evaluation.

  • output_attentions (bool, optional, defaults to None) – If set to True, the attentions tensors of all attention layers are returned. See attentions under returned tensors for more detail.

  • start_positions (tf.Tensor of shape (batch_size,), optional, defaults to None) – Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss.

  • end_positions (tf.Tensor of shape (batch_size,), optional, defaults to None) – Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Position outside of the sequence are not taken into account for computing the loss.

Returns

start_scores (Numpy array or tf.Tensor of shape (batch_size, sequence_length,)):

Span-start scores (before SoftMax).

end_scores (Numpy array or tf.Tensor of shape (batch_size, sequence_length,)):

Span-end scores (before SoftMax).

hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True):

tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True):

tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length):

Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

tuple(tf.Tensor) comprising various elements depending on the configuration (ElectraConfig) and inputs

Example:

>>> from transformers import ElectraTokenizer, TFElectraForQuestionAnswering
>>> import tensorflow as tf

>>> tokenizer = ElectraTokenizer.from_pretrained('google/electra-small-discriminator')
>>> model = TFElectraForQuestionAnswering.from_pretrained('google/electra-small-discriminator')

>>> question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
>>> input_dict = tokenizer(question, text, return_tensors='tf')
>>> start_scores, end_scores = model(input_dict)

>>> all_tokens = tokenizer.convert_ids_to_tokens(input_dict["input_ids"].numpy()[0])
>>> answer = ' '.join(all_tokens[tf.math.argmax(start_scores, 1)[0] : tf.math.argmax(end_scores, 1)[0]+1])
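
As a minimal follow-up sketch (not part of the original example), the span logits can be turned into probabilities to gauge the model's confidence in the selected start and end positions:

>>> start_probs = tf.nn.softmax(start_scores, axis=-1)
>>> end_probs = tf.nn.softmax(end_scores, axis=-1)
>>> confidence = float(tf.reduce_max(start_probs) * tf.reduce_max(end_probs))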