ALBERT

AlbertConfig

class transformers.AlbertConfig(vocab_size_or_config_json_file=30000, embedding_size=128, hidden_size=4096, num_hidden_layers=12, num_hidden_groups=1, num_attention_heads=64, intermediate_size=16384, inner_group_num=1, hidden_act='gelu_new', hidden_dropout_prob=0, attention_probs_dropout_prob=0, max_position_embeddings=512, type_vocab_size=2, initializer_range=0.02, layer_norm_eps=1e-12, **kwargs)[source]

Configuration class to store the configuration of an AlbertModel.

The default settings match the configuration of model albert_xxlarge.
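
A minimal usage sketch (variable names are illustrative; a model built directly from a configuration has randomly initialized weights, unlike one loaded with from_pretrained()):

from transformers import AlbertConfig, AlbertModel

# Initializing a configuration with the defaults above (albert_xxlarge-like settings)
configuration = AlbertConfig()

# Initializing a model from that configuration (random weights, not pre-trained)
model = AlbertModel(configuration)

# Accessing the model configuration
configuration = model.config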

AlbertTokenizer

class transformers.AlbertTokenizer(vocab_file, do_lower_case=True, remove_space=True, keep_accents=False, bos_token='[CLS]', eos_token='[SEP]', unk_token='<unk>', sep_token='[SEP]', pad_token='<pad>', cls_token='[CLS]', mask_token='[MASK]', **kwargs)[source]

SentencePiece-based tokenizer.

build_inputs_with_special_tokens(token_ids_0, token_ids_1=None)[source]

Build model inputs from a sequence or a pair of sequences for sequence classification tasks by concatenating and adding special tokens. An ALBERT sequence has the following format:

  • single sequence: [CLS] X [SEP]

  • pair of sequences: [CLS] A [SEP] B [SEP]
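
A brief sketch of calling this method directly (the sentences and variable names are illustrative; encode_plus normally builds these inputs for you):

from transformers import AlbertTokenizer

tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')

# token ids without any special tokens
ids_a = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("is this jacksonville ?"))
ids_b = tokenizer.convert_tokens_to_ids(tokenizer.tokenize("no it is not ."))

single = tokenizer.build_inputs_with_special_tokens(ids_a)        # [CLS] A [SEP]
pair = tokenizer.build_inputs_with_special_tokens(ids_a, ids_b)   # [CLS] A [SEP] B [SEP]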

convert_tokens_to_string(tokens)[source]

Converts a sequence of tokens (sub-word strings) into a single string.

create_token_type_ids_from_sequences(token_ids_0, token_ids_1=None)[source]

Creates a mask from the two sequences passed, to be used in a sequence-pair classification task. An ALBERT sequence pair mask has the following format:

0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1
| first sequence      | second sequence

If token_ids_1 is None, only the first portion of the mask (0s) is returned.
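
Continuing the tokenizer sketch above, a short example of building this mask directly:

# 0s cover "[CLS] A [SEP]", 1s cover "B [SEP]"
token_type_ids = tokenizer.create_token_type_ids_from_sequences(ids_a, ids_b)

# with a single sequence, only the 0 portion is returned
token_type_ids_single = tokenizer.create_token_type_ids_from_sequences(ids_a)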

get_special_tokens_mask(token_ids_0, token_ids_1=None, already_has_special_tokens=False)[source]

Retrieves sequence ids from a token list that has no special tokens added. This method is called when adding special tokens using the tokenizer prepare_for_model or encode_plus methods.

Parameters
  • token_ids_0 – list of ids (must not contain special tokens)

  • token_ids_1 – Optional list of ids (must not contain special tokens), necessary when fetching sequence ids for sequence pairs

  • already_has_special_tokens – (default False) Set to True if the token list is already formatted with special tokens for the model

Returns

1 for a special token, 0 for a sequence token.

Return type

A list of integers in the range [0, 1]
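
A short sketch, again reusing ids_a, ids_b and pair from the tokenizer example above:

# 1 marks the positions where [CLS]/[SEP] would be added, 0 marks sequence tokens
special_mask = tokenizer.get_special_tokens_mask(ids_a, ids_b)

# for a list of ids that already contains special tokens (e.g. the pair list built earlier)
special_mask_existing = tokenizer.get_special_tokens_mask(pair, already_has_special_tokens=True)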

save_vocabulary(save_directory)[source]

Save the sentencepiece vocabulary (copy original file) and special tokens file to a directory.
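
A minimal sketch (the directory path is hypothetical and must exist before saving):

import os

save_dir = './albert_tokenizer'  # hypothetical output directory
os.makedirs(save_dir, exist_ok=True)
vocab_files = tokenizer.save_vocabulary(save_dir)  # returns the paths of the saved files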

property vocab_size

Size of the base vocabulary (without the added tokens)

AlbertModel

class transformers.AlbertModel(config)[source]

The bare ALBERT Model transformer outputting raw hidden-states without any specific head on top. The ALBERT model was proposed in ALBERT: A Lite BERT for Self-supervised Learning of Language Representations by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. It presents two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT.

This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (AlbertConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

Inputs:
input_ids: torch.LongTensor of shape (batch_size, sequence_length):

Indices of input sequence tokens in the vocabulary. To match pre-training, ALBERT input sequence should be formatted with [CLS] and [SEP] tokens as follows:

  1. For sequence pairs:

    tokens:         [CLS] is this jack ##son ##ville ? [SEP] no it is not . [SEP]

    token_type_ids:   0   0  0    0    0     0       0   0   1  1  1  1   1   1

  2. For single sequences:

    tokens:         [CLS] the dog is hairy . [SEP]

    token_type_ids:   0   0   0   0  0     0   0

Albert is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than the left.

Indices can be obtained using transformers.AlbertTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.convert_tokens_to_ids() for details.

attention_mask: (optional) torch.FloatTensor of shape (batch_size, sequence_length):

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

token_type_ids: (optional) torch.LongTensor of shape (batch_size, sequence_length):

Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token (see BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding for more details).

position_ids: (optional) torch.LongTensor of shape (batch_size, sequence_length):

Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

head_mask: (optional) torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads):

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

Outputs: Tuple comprising various elements depending on the configuration (config) and inputs:
last_hidden_state: torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)

Sequence of hidden-states at the output of the last layer of the model.

pooler_output: torch.FloatTensor of shape (batch_size, hidden_size)

Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the sentence order prediction (classification) objective during ALBERT pretraining. This output is usually not a good summary of the semantic content of the input; you're often better off averaging or pooling the sequence of hidden-states for the whole input sequence.

hidden_states: (optional, returned when config.output_hidden_states=True)

list of torch.FloatTensor (one for the output of each layer + the output of the embeddings) of shape (batch_size, sequence_length, hidden_size): Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions: (optional, returned when config.output_attentions=True)

list of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length): Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
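
Examples (a usage sketch mirroring the examples for the other ALBERT classes in this document; 'albert-base-v2' is one of the pre-trained checkpoints):

import torch
from transformers import AlbertTokenizer, AlbertModel

tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = AlbertModel.from_pretrained('albert-base-v2')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0)  # Batch size 1
outputs = model(input_ids)
last_hidden_states = outputs[0]  # sequence of hidden-states at the last layer
pooled_output = outputs[1]       # pooler_output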

config_class

alias of transformers.configuration_albert.AlbertConfig

forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_input_embeddings()[source]

Get model’s input embeddings

load_tf_weights(config, tf_checkpoint_path)

Load tf checkpoints in a pytorch model.

set_input_embeddings(value)[source]

Set model’s input embeddings

AlbertForMaskedLM

class transformers.AlbertForMaskedLM(config)[source]

Albert Model with a language modeling head on top. The ALBERT model was proposed in ALBERT: A Lite BERT for Self-supervised Learning of Language Representations by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. It presents two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT.

This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (AlbertConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

Inputs:
input_ids: torch.LongTensor of shape (batch_size, sequence_length):

Indices of input sequence tokens in the vocabulary. To match pre-training, ALBERT input sequence should be formatted with [CLS] and [SEP] tokens as follows:

  1. For sequence pairs:

    tokens:         [CLS] is this jack ##son ##ville ? [SEP] no it is not . [SEP]

    token_type_ids:   0   0  0    0    0     0       0   0   1  1  1  1   1   1

  2. For single sequences:

    tokens:         [CLS] the dog is hairy . [SEP]

    token_type_ids:   0   0   0   0  0     0   0

Albert is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than the left.

Indices can be obtained using transformers.AlbertTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.convert_tokens_to_ids() for details.

attention_mask: (optional) torch.FloatTensor of shape (batch_size, sequence_length):

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

token_type_ids: (optional) torch.LongTensor of shape (batch_size, sequence_length):

Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token (see BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding for more details).

position_ids: (optional) torch.LongTensor of shape (batch_size, sequence_length):

Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

head_mask: (optional) torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads):

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

masked_lm_labels: (optional) torch.LongTensor of shape (batch_size, sequence_length):

Labels for computing the masked language modeling loss. Indices should be in [-1, 0, ..., config.vocab_size - 1] (see the input_ids docstring). Tokens with indices set to -1 are ignored (masked); the loss is only computed for the tokens with labels in [0, ..., config.vocab_size - 1].

Outputs: Tuple comprising various elements depending on the configuration (config) and inputs:
loss: (optional, returned when masked_lm_labels is provided) torch.FloatTensor of shape (1,):

Masked language modeling loss.

prediction_scores: torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)

Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

hidden_states: (optional, returned when config.output_hidden_states=True)

list of torch.FloatTensor (one for the output of each layer + the output of the embeddings) of shape (batch_size, sequence_length, hidden_size): Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions: (optional, returned when config.output_attentions=True)

list of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length): Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
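
Examples (a sketch following the same pattern as the other examples here; masked_lm_labels is set to the inputs only to illustrate how the loss is returned):

import torch
from transformers import AlbertTokenizer, AlbertForMaskedLM

tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = AlbertForMaskedLM.from_pretrained('albert-base-v2')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0)  # Batch size 1
outputs = model(input_ids, masked_lm_labels=input_ids)
loss, prediction_scores = outputs[:2]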

forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, masked_lm_labels=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

get_output_embeddings()[source]

Get the model's output embeddings. Returns None if the model doesn't have output embeddings.

tie_weights()[source]

Make sure we are sharing the input and output embeddings. Export to TorchScript can’t handle parameter sharing so we are cloning them instead.

AlbertForSequenceClassification

class transformers.AlbertForSequenceClassification(config)[source]

Albert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output), e.g. for GLUE tasks. The ALBERT model was proposed in ALBERT: A Lite BERT for Self-supervised Learning of Language Representations by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. It presents two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT.

This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (AlbertConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

Inputs:
input_ids: torch.LongTensor of shape (batch_size, sequence_length):

Indices of input sequence tokens in the vocabulary. To match pre-training, ALBERT input sequence should be formatted with [CLS] and [SEP] tokens as follows:

  1. For sequence pairs:

    tokens:         [CLS] is this jack ##son ##ville ? [SEP] no it is not . [SEP]

    token_type_ids:   0   0  0    0    0     0       0   0   1  1  1  1   1   1

  2. For single sequences:

    tokens:         [CLS] the dog is hairy . [SEP]

    token_type_ids:   0   0   0   0  0     0   0

Albert is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than the left.

Indices can be obtained using transformers.AlbertTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.convert_tokens_to_ids() for details.

attention_mask: (optional) torch.FloatTensor of shape (batch_size, sequence_length):

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

token_type_ids: (optional) torch.LongTensor of shape (batch_size, sequence_length):

Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token (see BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding for more details).

position_ids: (optional) torch.LongTensor of shape (batch_size, sequence_length):

Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

head_mask: (optional) torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads):

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

labels: (optional) torch.LongTensor of shape (batch_size,):

Labels for computing the sequence classification/regression loss. Indices should be in [0, ..., config.num_labels - 1]. If config.num_labels == 1, a regression loss is computed (mean-square loss); if config.num_labels > 1, a classification loss is computed (cross-entropy).

Outputs: Tuple comprising various elements depending on the configuration (config) and inputs:
loss: (optional, returned when labels is provided) torch.FloatTensor of shape (1,):

Classification (or regression if config.num_labels==1) loss.

logits: torch.FloatTensor of shape (batch_size, config.num_labels)

Classification (or regression if config.num_labels==1) scores (before SoftMax).

hidden_states: (optional, returned when config.output_hidden_states=True)

list of torch.FloatTensor (one for the output of each layer + the output of the embeddings) of shape (batch_size, sequence_length, hidden_size): Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions: (optional, returned when config.output_attentions=True)

list of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length): Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Examples:

import torch
from transformers import AlbertTokenizer, AlbertForSequenceClassification

tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = AlbertForSequenceClassification.from_pretrained('albert-base-v2')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0)  # Batch size 1
labels = torch.tensor([1]).unsqueeze(0)  # Batch size 1
outputs = model(input_ids, labels=labels)
loss, logits = outputs[:2]

forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

AlbertForQuestionAnswering

class transformers.AlbertForQuestionAnswering(config)[source]

Albert Model with a span classification head on top for extractive question-answering tasks like SQuAD (linear layers on top of the hidden-states output to compute span start logits and span end logits). The ALBERT model was proposed in ALBERT: A Lite BERT for Self-supervised Learning of Language Representations by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. It presents two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT.

This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (AlbertConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

Inputs:
input_ids: torch.LongTensor of shape (batch_size, sequence_length):

Indices of input sequence tokens in the vocabulary. To match pre-training, ALBERT input sequence should be formatted with [CLS] and [SEP] tokens as follows:

  1. For sequence pairs:

    tokens:         [CLS] is this jack ##son ##ville ? [SEP] no it is not . [SEP]

    token_type_ids:   0   0  0    0    0     0       0   0   1  1  1  1   1   1

  2. For single sequences:

    tokens:         [CLS] the dog is hairy . [SEP]

    token_type_ids:   0   0   0   0  0     0   0

Albert is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than the left.

Indices can be obtained using transformers.AlbertTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.convert_tokens_to_ids() for details.

attention_mask: (optional) torch.FloatTensor of shape (batch_size, sequence_length):

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

token_type_ids: (optional) torch.LongTensor of shape (batch_size, sequence_length):

Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token (see BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding for more details).

position_ids: (optional) torch.LongTensor of shape (batch_size, sequence_length):

Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

head_mask: (optional) torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads):

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

start_positions: (optional) torch.LongTensor of shape (batch_size,):

Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.

end_positions: (optional) torch.LongTensor of shape (batch_size,):

Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (sequence_length). Positions outside of the sequence are not taken into account for computing the loss.

Outputs: Tuple comprising various elements depending on the configuration (config) and inputs:
loss: (optional, returned when start_positions and end_positions are provided) torch.FloatTensor of shape (1,):

Total span extraction loss is the sum of a Cross-Entropy for the start and end positions.

start_scores: torch.FloatTensor of shape (batch_size, sequence_length,)

Span-start scores (before SoftMax).

end_scores: torch.FloatTensor of shape (batch_size, sequence_length,)

Span-end scores (before SoftMax).

hidden_states: (optional, returned when config.output_hidden_states=True)

list of torch.FloatTensor (one for the output of each layer + the output of the embeddings) of shape (batch_size, sequence_length, hidden_size): Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions: (optional, returned when config.output_attentions=True)

list of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length): Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Examples:

import torch
from transformers import AlbertTokenizer, AlbertForQuestionAnswering

tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = AlbertForQuestionAnswering.from_pretrained('albert-base-v2')
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
input_text = "[CLS] " + question + " [SEP] " + text + " [SEP]"
input_ids = tokenizer.encode(input_text, add_special_tokens=False)  # special tokens are already in input_text
# 0 for the question segment (up to and including the first [SEP]), 1 for the answer segment
token_type_ids = [0 if i <= input_ids.index(tokenizer.sep_token_id) else 1 for i in range(len(input_ids))]
start_scores, end_scores = model(torch.tensor([input_ids]), token_type_ids=torch.tensor([token_type_ids]))
all_tokens = tokenizer.convert_ids_to_tokens(input_ids)
print(' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1]))
# a nice puppet

forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, start_positions=None, end_positions=None)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

TFAlbertModel

class transformers.TFAlbertModel(config, **kwargs)[source]

The bare Albert Model transformer outputting raw hidden-states without any specific head on top. The ALBERT model was proposed in ALBERT: A Lite BERT for Self-supervised Learning of Language Representations by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. It presents two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT.

This model is a tf.keras.Model sub-class. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

Note on the model inputs:

TF 2.0 models accept two formats as inputs:

  • having all inputs as keyword arguments (like PyTorch models), or

  • having all inputs as a list, tuple or dict in the first positional arguments.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).

If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

  • a single Tensor with input_ids only and nothing else: model(input_ids)

  • a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:

    model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])

  • a dictionary with one or several input Tensors associated to the input names given in the docstring:

    model({'input_ids': input_ids, 'token_type_ids': token_type_ids})
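
A hedged sketch of the three input formats (the all-ones attention mask is for illustration only):

import tensorflow as tf
from transformers import AlbertTokenizer, TFAlbertModel

tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = TFAlbertModel.from_pretrained('albert-base-v2')
input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :]  # Batch size 1
attention_mask = tf.ones_like(input_ids)

outputs = model(input_ids, attention_mask=attention_mask)                    # keyword arguments
outputs = model([input_ids, attention_mask])                                 # list, in docstring order
outputs = model({'input_ids': input_ids, 'attention_mask': attention_mask})  # dict keyed by input name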

Parameters

config (AlbertConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

Inputs:
input_ids: Numpy array or tf.Tensor of shape (batch_size, sequence_length):

Indices of input sequence tokens in the vocabulary. To match pre-training, ALBERT input sequence should be formatted with [CLS] and [SEP] tokens as follows:

  1. For sequence pairs:

    tokens:         [CLS] is this jack ##son ##ville ? [SEP] no it is not . [SEP]

    token_type_ids:   0   0  0    0    0     0       0   0   1  1  1  1   1   1

  2. For single sequences:

    tokens:         [CLS] the dog is hairy . [SEP]

    token_type_ids:   0   0   0   0  0     0   0

Albert is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than the left.

Indices can be obtained using transformers.AlbertTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.convert_tokens_to_ids() for details.

attention_mask: (optional) Numpy array or tf.Tensor of shape (batch_size, sequence_length):

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

token_type_ids: (optional) Numpy array or tf.Tensor of shape (batch_size, sequence_length):

Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token (see BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding for more details).

position_ids: (optional) Numpy array or tf.Tensor of shape (batch_size, sequence_length):

Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

head_mask: (optional) Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads):

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

Outputs: Tuple comprising various elements depending on the configuration (config) and inputs:
last_hidden_state: tf.Tensor of shape (batch_size, sequence_length, hidden_size)

Sequence of hidden-states at the output of the last layer of the model.

pooler_output: tf.Tensor of shape (batch_size, hidden_size)

Last layer hidden-state of the first token of the sequence (classification token) further processed by a Linear layer and a Tanh activation function. The Linear layer weights are trained from the sentence order prediction (classification) objective during ALBERT pretraining. This output is usually not a good summary of the semantic content of the input; you're often better off averaging or pooling the sequence of hidden-states for the whole input sequence.

hidden_states: (optional, returned when config.output_hidden_states=True)

list of tf.Tensor (one for the output of each layer + the output of the embeddings) of shape (batch_size, sequence_length, hidden_size): Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions: (optional, returned when config.output_attentions=True)

list of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length): Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Examples:

import tensorflow as tf
from transformers import AlbertTokenizer, TFAlbertModel

tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = TFAlbertModel.from_pretrained('albert-base-v2')
input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :]  # Batch size 1
outputs = model(input_ids)
last_hidden_states = outputs[0]  # The last hidden-state is the first element of the output tuple

call(inputs, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, training=False)[source]

Calls the model on new inputs.

In this case call just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs).

Parameters
  • inputs – A tensor or list of tensors.

  • training – Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.

  • mask – A mask or list of masks. A mask can be either a tensor or None (no mask).

Returns

A tensor if there is a single output, or a list of tensors if there are more than one outputs.

get_input_embeddings()[source]

Get model’s input embeddings

TFAlbertForMaskedLM

class transformers.TFAlbertForMaskedLM(config, *inputs, **kwargs)[source]

Albert Model with a language modeling head on top. The ALBERT model was proposed in ALBERT: A Lite BERT for Self-supervised Learning of Language Representations by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. It presents two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT.

This model is a tf.keras.Model sub-class. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

Note on the model inputs:

TF 2.0 models accept two formats as inputs:

  • having all inputs as keyword arguments (like PyTorch models), or

  • having all inputs as a list, tuple or dict in the first positional arguments.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).

If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

  • a single Tensor with input_ids only and nothing else: model(input_ids)

  • a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:

    model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])

  • a dictionary with one or several input Tensors associated to the input names given in the docstring:

    model({'input_ids': input_ids, 'token_type_ids': token_type_ids})

Parameters

config (AlbertConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

Inputs:
input_ids: Numpy array or tf.Tensor of shape (batch_size, sequence_length):

Indices of input sequence tokens in the vocabulary. To match pre-training, ALBERT input sequence should be formatted with [CLS] and [SEP] tokens as follows:

  1. For sequence pairs:

    tokens:         [CLS] is this jack ##son ##ville ? [SEP] no it is not . [SEP]

    token_type_ids:   0   0  0    0    0     0       0   0   1  1  1  1   1   1

  2. For single sequences:

    tokens:         [CLS] the dog is hairy . [SEP]

    token_type_ids:   0   0   0   0  0     0   0

Albert is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than the left.

Indices can be obtained using transformers.AlbertTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.convert_tokens_to_ids() for details.

attention_mask: (optional) Numpy array or tf.Tensor of shape (batch_size, sequence_length):

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

token_type_ids: (optional) Numpy array or tf.Tensor of shape (batch_size, sequence_length):

Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token (see BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding for more details).

position_ids: (optional) Numpy array or tf.Tensor of shape (batch_size, sequence_length):

Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

head_mask: (optional) Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads):

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

Outputs: Tuple comprising various elements depending on the configuration (config) and inputs:
prediction_scores: Numpy array or tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)

Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

hidden_states: (optional, returned when config.output_hidden_states=True)

list of Numpy array or tf.Tensor (one for the output of each layer + the output of the embeddings) of shape (batch_size, sequence_length, hidden_size): Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions: (optional, returned when config.output_attentions=True)

list of Numpy array or tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length): Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Examples:

import tensorflow as tf
from transformers import AlbertTokenizer, TFAlbertForMaskedLM

tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = TFAlbertForMaskedLM.from_pretrained('albert-base-v2')
input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :]  # Batch size 1
outputs = model(input_ids)
prediction_scores = outputs[0]

call(inputs, **kwargs)[source]

Calls the model on new inputs.

In this case call just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs).

Parameters
  • inputs – A tensor or list of tensors.

  • training – Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.

  • mask – A mask or list of masks. A mask can be either a tensor or None (no mask).

Returns

A tensor if there is a single output, or a list of tensors if there are more than one outputs.

get_output_embeddings()[source]

Get the model's output embeddings. Returns None if the model doesn't have output embeddings.

TFAlbertForSequenceClassification

class transformers.TFAlbertForSequenceClassification(config, *inputs, **kwargs)[source]

Albert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled output), e.g. for GLUE tasks. The ALBERT model was proposed in ALBERT: A Lite BERT for Self-supervised Learning of Language Representations by Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, Radu Soricut. It presents two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT.

This model is a tf.keras.Model sub-class. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

Note on the model inputs:

TF 2.0 models accept two formats as inputs:

  • having all inputs as keyword arguments (like PyTorch models), or

  • having all inputs as a list, tuple or dict in the first positional arguments.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).

If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

  • a single Tensor with input_ids only and nothing else: model(input_ids)

  • a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:

    model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])

  • a dictionary with one or several input Tensors associated to the input names given in the docstring:

    model({'input_ids': input_ids, 'token_type_ids': token_type_ids})

Parameters

config (AlbertConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

Inputs:
input_ids: Numpy array or tf.Tensor of shape (batch_size, sequence_length):

Indices of input sequence tokens in the vocabulary. To match pre-training, ALBERT input sequence should be formatted with [CLS] and [SEP] tokens as follows:

  1. For sequence pairs:

    tokens:         [CLS] is this jack ##son ##ville ? [SEP] no it is not . [SEP]

    token_type_ids:   0   0  0    0    0     0       0   0   1  1  1  1   1   1

  2. For single sequences:

    tokens:         [CLS] the dog is hairy . [SEP]

    token_type_ids:   0   0   0   0  0     0   0

Albert is a model with absolute position embeddings so it’s usually advised to pad the inputs on the right rather than the left.

Indices can be obtained using transformers.AlbertTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.convert_tokens_to_ids() for details.

attention_mask: (optional) Numpy array or tf.Tensor of shape (batch_size, sequence_length):

Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

token_type_ids: (optional) Numpy array or tf.Tensor of shape (batch_size, sequence_length):

Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token (see BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding for more details).

position_ids: (optional) Numpy array or tf.Tensor of shape (batch_size, sequence_length):

Indices of positions of each input sequence token in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

head_mask: (optional) Numpy array or tf.Tensor of shape (num_heads,) or (num_layers, num_heads):

Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

Outputs: Tuple comprising various elements depending on the configuration (config) and inputs:
logits: Numpy array or tf.Tensor of shape (batch_size, config.num_labels)

Classification (or regression if config.num_labels==1) scores (before SoftMax).

hidden_states: (optional, returned when config.output_hidden_states=True)

list of Numpy array or tf.Tensor (one for the output of each layer + the output of the embeddings) of shape (batch_size, sequence_length, hidden_size): Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions: (optional, returned when config.output_attentions=True)

list of Numpy array or tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length): Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Examples:

import tensorflow as tf
from transformers import AlbertTokenizer, TFAlbertForSequenceClassification

tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = TFAlbertForSequenceClassification.from_pretrained('albert-base-v2')
input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute"))[None, :]  # Batch size 1
outputs = model(input_ids)
logits = outputs[0]

call(inputs, **kwargs)[source]

Calls the model on new inputs.

In this case call just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs).

Parameters
  • inputs – A tensor or list of tensors.

  • training – Boolean or boolean scalar tensor, indicating whether to run the Network in training mode or inference mode.

  • mask – A mask or list of masks. A mask can be either a tensor or None (no mask).

Returns

A tensor if there is a single output, or a list of tensors if there are more than one outputs.