OpenAI GPT

Overview

The OpenAI GPT model was proposed in Improving Language Understanding by Generative Pre-Training by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever. It’s a causal (unidirectional) transformer pre-trained using language modeling on a large corpus with long range dependencies, the Toronto Book Corpus.

The abstract from the paper is the following:

Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering, semantic similarity assessment, and document classification. Although large unlabeled text corpora are abundant, labeled data for learning these specific tasks is scarce, making it challenging for discriminatively trained models to perform adequately. We demonstrate that large gains on these tasks can be realized by generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task. In contrast to previous approaches, we make use of task-aware input transformations during fine-tuning to achieve effective transfer while requiring minimal changes to the model architecture. We demonstrate the effectiveness of our approach on a wide range of benchmarks for natural language understanding. Our general task-agnostic model outperforms discriminatively trained models that use architectures specifically crafted for each task, significantly improving upon the state of the art in 9 out of the 12 tasks studied.

Tips:

  • GPT is a model with absolute position embeddings, so it’s usually advised to pad the inputs on the right rather than the left.

  • GPT was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next token in a sequence. Leveraging this feature allows GPT to generate syntactically coherent text, as can be observed in the run_generation.py example script and in the sketch below.
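
A minimal greedy-decoding sketch of this next-token prediction, using only the forward API documented further below; the prompt and the number of generated tokens are illustrative choices, not taken from the example scripts:

import torch
from transformers import OpenAIGPTTokenizer, OpenAIGPTLMHeadModel

tokenizer = OpenAIGPTTokenizer.from_pretrained('openai-gpt')
model = OpenAIGPTLMHeadModel.from_pretrained('openai-gpt')
model.eval()

input_ids = torch.tensor(tokenizer.encode("the lake was quiet and")).unsqueeze(0)  # Batch size 1
with torch.no_grad():
    for _ in range(20):  # number of tokens to generate (illustrative)
        logits = model(input_ids)[0]                    # shape (1, sequence_length, vocab_size)
        next_token = logits[:, -1, :].argmax(dim=-1)    # greedy choice of the next token
        input_ids = torch.cat([input_ids, next_token.unsqueeze(-1)], dim=-1)

print(tokenizer.decode(input_ids[0].tolist()))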

Write With Transformer is a webapp created and hosted by Hugging Face showcasing the generative capabilities of several models. GPT is one of them.

The original code can be found here.

OpenAIGPTConfig

class transformers.OpenAIGPTConfig(vocab_size=40478, n_positions=512, n_ctx=512, n_embd=768, n_layer=12, n_head=12, afn='gelu', resid_pdrop=0.1, embd_pdrop=0.1, attn_pdrop=0.1, layer_norm_epsilon=1e-05, initializer_range=0.02, predict_special_tokens=True, summary_type='cls_index', summary_use_proj=True, summary_activation=None, summary_proj_to_labels=True, summary_first_dropout=0.1, **kwargs)[source]

This is the configuration class to store the configuration of an OpenAIGPTModel. It is used to instantiate a GPT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the GPT architecture from OpenAI.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

Parameters
  • vocab_size (int, optional, defaults to 40478) – Vocabulary size of the GPT model. Defines the different tokens that can be represented by the input_ids passed to the forward method of OpenAIGPTModel.

  • n_positions (int, optional, defaults to 512) – The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).

  • n_ctx (int, optional, defaults to 512) – Dimensionality of the causal mask (usually same as n_positions).

  • n_embd (int, optional, defaults to 768) – Dimensionality of the embeddings and hidden states.

  • n_layer (int, optional, defaults to 12) – Number of hidden layers in the Transformer encoder.

  • n_head (int, optional, defaults to 12) – Number of attention heads for each attention layer in the Transformer encoder.

  • afn (str or function, optional, defaults to “gelu”) – The non-linear activation function (function or string) in the encoder and pooler. If string, “gelu”, “relu”, “swish” and “gelu_new” are supported.

  • resid_pdrop (float, optional, defaults to 0.1) – The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.

  • embd_pdrop (float, optional, defaults to 0.1) – The dropout ratio for the embeddings.

  • attn_pdrop (float, optional, defaults to 0.1) – The dropout ratio for the attention.

  • layer_norm_epsilon (float, optional, defaults to 1e-5) – The epsilon to use in the layer normalization layers.

  • initializer_range (float, optional, defaults to 0.02) – The standard deviation of the truncated_normal_initializer for initializing all weight matrices.

  • predict_special_tokens (boolean, optional, defaults to True) – Whether special tokens should be predicted when the model has a language modeling head.

  • summary_type (string, optional, defaults to “cls_index”) –

    Argument used when doing sequence summary. Used for the multiple choice head in OpenAIGPTDoubleHeadsModel. Is one of the following options:

    • ’last’ => take the last token hidden state (like XLNet)

    • ’first’ => take the first token hidden state (like Bert)

    • ’mean’ => take the mean of all tokens hidden states

    • ’cls_index’ => supply a Tensor of classification token position (GPT/GPT-2)

    • ’attn’ => Not implemented now, use multi-head attention

  • summary_use_proj (boolean, optional, defaults to True) – Argument used when doing sequence summary. Used for the multiple choice head in OpenAIGPTDoubleHeadsModel. Whether to add a projection after the vector extraction.

  • summary_activation (string or None, optional, defaults to None) – Argument used when doing sequence summary. Used for the multiple choice head in OpenAIGPTDoubleHeadsModel. ‘tanh’ => add a tanh activation to the output, other => no activation.

  • summary_proj_to_labels (boolean, optional, defaults to True) – Argument used when doing sequence summary. Used for the multiple choice head in OpenAIGPTDoubleHeadsModel. If True, the projection outputs to config.num_labels classes (otherwise to hidden_size).

  • summary_first_dropout (float, optional, defaults to 0.1) – Argument used when doing sequence summary. Used for the multiple choice head in OpenAIGPTDoubleHeadsModel. Add a dropout before the projection and activation.

Example:

from transformers import OpenAIGPTConfig, OpenAIGPTModel

# Initializing a GPT configuration
configuration = OpenAIGPTConfig()

# Initializing a model from the configuration
model = OpenAIGPTModel(configuration)

# Accessing the model configuration
configuration = model.config
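
The same pattern can be used to instantiate a non-default architecture; the hyperparameter values below are illustrative and do not correspond to a released checkpoint:

from transformers import OpenAIGPTConfig, OpenAIGPTModel

# Hypothetical smaller architecture (values chosen for illustration only)
small_config = OpenAIGPTConfig(n_embd=512, n_layer=6, n_head=8, n_positions=512, n_ctx=512)
small_model = OpenAIGPTModel(small_config)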

OpenAIGPTTokenizer

class transformers.OpenAIGPTTokenizer(vocab_file, merges_file, unk_token='<unk>', **kwargs)[source]

BPE tokenizer for OpenAI GPT. Peculiarities:

  • lower-cases all inputs

  • uses the SpaCy tokenizer and ftfy for pre-BPE tokenization if they are installed, falling back to BERT’s BasicTokenizer if not.

This tokenizer inherits from PreTrainedTokenizer, which contains most of the main methods. Users should refer to the superclass for more information regarding those methods.

Parameters
  • vocab_file (str) – Path to the vocabulary file.

  • merges_file (str) – Path to the merges file.

  • unk_token (string, optional, defaults to “<unk>”) – The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
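
A short usage sketch (the sample sentence is illustrative); note that the input is lower-cased before BPE is applied:

from transformers import OpenAIGPTTokenizer

tokenizer = OpenAIGPTTokenizer.from_pretrained('openai-gpt')

tokens = tokenizer.tokenize("Hello World!")      # lower-cased, then split into BPE tokens
ids = tokenizer.convert_tokens_to_ids(tokens)    # map tokens to vocabulary indices
text = tokenizer.decode(ids)                     # back to a string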

save_vocabulary(save_directory)[source]

Save the vocabulary and special tokens file to a directory.

Parameters

save_directory (str) – The directory in which to save the vocabulary.

Returns

Paths to the files saved.

Return type

Tuple(str)
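
A minimal sketch of saving the vocabulary; the target directory is an illustrative choice:

import os
from transformers import OpenAIGPTTokenizer

tokenizer = OpenAIGPTTokenizer.from_pretrained('openai-gpt')
os.makedirs('./gpt_tokenizer', exist_ok=True)
vocab_path, merges_path = tokenizer.save_vocabulary('./gpt_tokenizer')  # paths to the two saved files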

OpenAIGPTTokenizerFast

class transformers.OpenAIGPTTokenizerFast(vocab_file, merges_file, unk_token='<unk>', **kwargs)[source]

Construct a “Fast” BPE tokenizer for OpenAI GPT (backed by HuggingFace’s tokenizers library).

Peculiarities:

  • lower-cases all inputs

  • uses the SpaCy tokenizer and ftfy for pre-BPE tokenization if they are installed, falling back to BERT’s BasicTokenizer if not.

This tokenizer inherits from PreTrainedTokenizerFast, which contains most of the main methods. Users should refer to the superclass for more information regarding those methods.

Parameters
  • vocab_file (str) – Path to the vocabulary file.

  • merges_file (str) – Path to the merges file.

  • unk_token (string, optional, defaults to “<unk>”) – The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
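
Loading and using the fast tokenizer mirrors the Python tokenizer above (the sample sentence is illustrative):

from transformers import OpenAIGPTTokenizerFast

fast_tokenizer = OpenAIGPTTokenizerFast.from_pretrained('openai-gpt')
ids = fast_tokenizer.encode("Hello World!")  # lower-cased before BPE, like the Python tokenizer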

OpenAIGPTModel

class transformers.OpenAIGPTModel(config)[source]

The bare OpenAI GPT transformer model outputting raw hidden-states without any specific head on top.

This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (OpenAIGPTConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None)[source]

The OpenAIGPTModel forward method overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using transformers.OpenAIGPTTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.encode_plus() for details.

    What are input IDs?

  • attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

    What are attention masks?

  • token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token

    What are token type IDs?

  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

    What are position IDs?

  • head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional, defaults to None) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional, defaults to None) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

Returns

last_hidden_state (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size)):

Sequence of hidden-states at the last layer of the model.

hidden_states (tuple(torch.FloatTensor), optional, returned when config.output_hidden_states=True):

Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions (tuple(torch.FloatTensor), optional, returned when config.output_attentions=True):

Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

tuple(torch.FloatTensor) comprising various elements depending on the configuration (OpenAIGPTConfig) and inputs

Examples:

from transformers import OpenAIGPTTokenizer, OpenAIGPTModel
import torch

tokenizer = OpenAIGPTTokenizer.from_pretrained('openai-gpt')
model = OpenAIGPTModel.from_pretrained('openai-gpt')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0)  # Batch size 1
outputs = model(input_ids)
last_hidden_states = outputs[0]  # The last hidden-state is the first element of the output tuple
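
A sketch of requesting the optional hidden_states and attentions outputs described above, assuming the output_hidden_states and output_attentions configuration attributes can be overridden through from_pretrained() keyword arguments:

import torch
from transformers import OpenAIGPTTokenizer, OpenAIGPTModel

tokenizer = OpenAIGPTTokenizer.from_pretrained('openai-gpt')
model = OpenAIGPTModel.from_pretrained('openai-gpt', output_hidden_states=True, output_attentions=True)

input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0)
outputs = model(input_ids)
last_hidden_state = outputs[0]
hidden_states = outputs[1]   # tuple: embedding output + one tensor per layer
attentions = outputs[2]      # tuple: one attention tensor per layer
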
get_input_embeddings()[source]

Returns the model’s input embeddings.

Returns

A torch module mapping vocabulary to hidden states.

Return type

nn.Module

set_input_embeddings(new_embeddings)[source]

Set the model’s input embeddings.

Parameters

new_embeddings (nn.Module) – A module mapping vocabulary to hidden states.
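
A minimal sketch of the two methods above; setting the same module back is a no-op round trip:

from transformers import OpenAIGPTModel

model = OpenAIGPTModel.from_pretrained('openai-gpt')

embeddings = model.get_input_embeddings()   # nn.Embedding of shape (vocab_size, n_embd)
model.set_input_embeddings(embeddings)      # the model is unchanged after this call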

OpenAIGPTLMHeadModel

class transformers.OpenAIGPTLMHeadModel(config)[source]

OpenAI GPT Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings).

This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (OpenAIGPTConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None)[source]

The OpenAIGPTLMHeadModel forward method overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using transformers.OpenAIGPTTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.encode_plus() for details.

    What are input IDs?

  • attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

    What are attention masks?

  • token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token

    What are token type IDs?

  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

    What are position IDs?

  • head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional, defaults to None) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional, defaults to None) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • labels (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) – Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set labels = input_ids. Indices are selected in [-100, 0, ..., config.vocab_size]. All labels set to -100 are ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size].

Returns

loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided)

Language modeling loss.

prediction_scores (torch.FloatTensor of shape (batch_size, sequence_length, config.vocab_size)):

Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

past (List[torch.FloatTensor] of length config.n_layers with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)):

Contains pre-computed hidden-states (key and values in the attention blocks). Can be used (see past input) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as input ids as they have already been computed.

hidden_states (tuple(torch.FloatTensor), optional, returned when config.output_hidden_states=True):

Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions (tuple(torch.FloatTensor), optional, returned when config.output_attentions=True):

Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

tuple(torch.FloatTensor) comprising various elements depending on the configuration (OpenAIGPTConfig) and inputs

Examples:

from transformers import OpenAIGPTTokenizer, OpenAIGPTLMHeadModel
import torch

tokenizer = OpenAIGPTTokenizer.from_pretrained('openai-gpt')
model = OpenAIGPTLMHeadModel.from_pretrained('openai-gpt')
input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True)).unsqueeze(0)  # Batch size 1
outputs = model(input_ids, labels=input_ids)
loss, logits = outputs[:2]
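
Continuing from the example above, a hedged sketch of one fine-tuning step; the optimizer and learning rate are illustrative choices, not prescribed by the library:

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)  # illustrative optimizer and learning rate
loss.backward()        # back-propagate the language modeling loss computed above
optimizer.step()
optimizer.zero_grad()
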
get_output_embeddings()[source]

Returns the model’s output embeddings.

Returns

A torch module mapping hidden states to vocabulary.

Return type

nn.Module
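
Since the language modeling head is tied to the input embeddings, the module returned here shares its weight matrix with the input embeddings; a small check (a sketch, assuming the standard weight tying described above):

import torch
from transformers import OpenAIGPTLMHeadModel

model = OpenAIGPTLMHeadModel.from_pretrained('openai-gpt')
tied = torch.equal(model.get_output_embeddings().weight, model.get_input_embeddings().weight)
print(tied)  # expected: True, because the LM head weights are tied to the input embeddings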

OpenAIGPTDoubleHeadsModel

class transformers.OpenAIGPTDoubleHeadsModel(config)[source]

OpenAI GPT Model transformer with a language modeling and a multiple-choice classification head on top, e.g. for RocStories/SWAG tasks. The two heads are two linear layers. The language modeling head has its weights tied to the input embeddings; the classification head takes as input the hidden state of a specified classification token index in the input sequence.

This model is a PyTorch torch.nn.Module sub-class. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters

config (OpenAIGPTConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, mc_token_ids=None, lm_labels=None, mc_labels=None)[source]

The OpenAIGPTDoubleHeadsModel forward method overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
  • input_ids (torch.LongTensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using transformers.OpenAIGPTTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.encode_plus() for details.

    What are input IDs?

  • attention_mask (torch.FloatTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

    What are attention masks?

  • token_type_ids (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token

    What are token type IDs?

  • position_ids (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) –

    Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

    What are position IDs?

  • head_mask (torch.FloatTensor of shape (num_heads,) or (num_layers, num_heads), optional, defaults to None) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

  • inputs_embeds (torch.FloatTensor of shape (batch_size, sequence_length, hidden_size), optional, defaults to None) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • mc_token_ids (torch.LongTensor of shape (batch_size, num_choices), optional, defaults to index of the last token of the input) – Index of the classification token in each input sequence. Selected in the range [0, input_ids.size(-1) - 1].

  • lm_labels (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) – Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set lm_labels = input_ids. Indices are selected in [-100, 0, ..., config.vocab_size]. All labels set to -100 are ignored (masked); the loss is only computed for labels in [0, ..., config.vocab_size].

  • mc_labels (torch.LongTensor of shape (batch_size), optional, defaults to None) – Labels for computing the multiple choice classification loss. Indices should be in [0, ..., num_choices - 1] where num_choices is the size of the second dimension of the input tensors (see input_ids above).

Returns

lm_loss (torch.FloatTensor of shape (1,), optional, returned when lm_labels is provided):

Language modeling loss.

mc_loss (torch.FloatTensor of shape (1,), optional, returned when multiple_choice_labels is provided):

Multiple choice classification loss.

lm_prediction_scores (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, config.vocab_size)):

Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

mc_prediction_scores (torch.FloatTensor of shape (batch_size, num_choices)):

Prediction scores of the multiple choice classification head (scores for each choice before SoftMax).

past (List[torch.FloatTensor] of length config.n_layers with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)):

Contains pre-computed hidden-states (key and values in the attention blocks). Can be used (see past input) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as input ids as they have already been computed.

hidden_states (tuple(torch.FloatTensor), optional, returned when config.output_hidden_states=True):

Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions (tuple(torch.FloatTensor), optional, returned when config.output_attentions=True):

Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

tuple(torch.FloatTensor) comprising various elements depending on the configuration (OpenAIGPTConfig) and inputs

Examples:

from transformers import OpenAIGPTTokenizer, OpenAIGPTDoubleHeadsModel
import torch

tokenizer = OpenAIGPTTokenizer.from_pretrained('openai-gpt')
model = OpenAIGPTDoubleHeadsModel.from_pretrained('openai-gpt')
tokenizer.add_special_tokens({'cls_token': '[CLS]'})  # Add a [CLS] to the vocabulary (we should train it also!)
model.resize_token_embeddings(len(tokenizer))

choices = ["Hello, my dog is cute [CLS]", "Hello, my cat is cute [CLS]"]
input_ids = torch.tensor([tokenizer.encode(s) for s in choices]).unsqueeze(0)  # Batch size 1, 2 choices
mc_token_ids = torch.tensor([input_ids.size(-1)-1, input_ids.size(-1)-1]).unsqueeze(0)  # Batch size 1

outputs = model(input_ids, mc_token_ids=mc_token_ids)
lm_prediction_scores, mc_prediction_scores = outputs[:2]
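
Continuing from the example above, a hedged sketch of passing mc_labels to obtain the multiple choice classification loss; the label value is illustrative:

mc_labels = torch.tensor([0])  # shape (batch_size,): choice 0 is marked as the correct one (illustrative)
outputs = model(input_ids, mc_token_ids=mc_token_ids, mc_labels=mc_labels)
mc_loss = outputs[0]           # the classification loss comes first when mc_labels is provided
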
get_output_embeddings()[source]

Returns the model’s output embeddings.

Returns

A torch module mapping hidden states to vocabulary.

Return type

nn.Module

TFOpenAIGPTModel

class transformers.TFOpenAIGPTModel(*args, **kwargs)[source]

The bare OpenAI GPT transformer model outputting raw hidden-states without any specific head on top.

Note

TF 2.0 models accept two formats as inputs:

  • having all inputs as keyword arguments (like PyTorch models), or

  • having all inputs as a list, tuple or dict in the first positional argument.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).

If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument (see the sketch after this list):

  • a single Tensor with input_ids only and nothing else: model(input_ids)

  • a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])

  • a dictionary with one or several input Tensors associated to the input names given in the docstring: model({'input_ids': input_ids, 'token_type_ids': token_type_ids})
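
A minimal sketch of the dictionary format, the third option above (the sample sentence is illustrative):

import tensorflow as tf
from transformers import OpenAIGPTTokenizer, TFOpenAIGPTModel

tokenizer = OpenAIGPTTokenizer.from_pretrained('openai-gpt')
model = TFOpenAIGPTModel.from_pretrained('openai-gpt')

input_ids = tf.constant([tokenizer.encode("Hello, my dog is cute")])
attention_mask = tf.ones_like(input_ids)                     # no padding here, so all positions attend
outputs = model({'input_ids': input_ids, 'attention_mask': attention_mask})
last_hidden_states = outputs[0]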

Parameters

config (OpenAIGPTConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

call(inputs, **kwargs)[source]

The TFOpenAIGPTModel forward method overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
  • input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using transformers.OpenAIGPTTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.encode_plus() for details.

    What are input IDs?

  • attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional, defaults to None) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

    What are attention masks?

  • token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional, defaults to None) –

    Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token

    What are token type IDs?

  • position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional, defaults to None) –

    Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

    What are position IDs?

  • head_mask (tf.Tensor or Numpy array of shape (num_heads,) or (num_layers, num_heads), optional, defaults to None) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

  • inputs_embeds (tf.Tensor or Numpy array of shape (batch_size, sequence_length, hidden_size), optional, defaults to None) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • training (boolean, optional, defaults to False) – Whether to activate dropout modules (if set to True) during training or to de-activate them (if set to False) for evaluation.

Returns

last_hidden_state (tf.Tensor of shape (batch_size, sequence_length, hidden_size)):

Sequence of hidden-states at the last layer of the model.

hidden_states (tuple(tf.Tensor), optional, returned when config.output_hidden_states=True):

Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions (tuple(tf.Tensor), optional, returned when config.output_attentions=True):

Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

tuple(tf.Tensor) comprising various elements depending on the configuration (OpenAIGPTConfig) and inputs

Examples:

import tensorflow as tf
from transformers import OpenAIGPTTokenizer, TFOpenAIGPTModel

tokenizer = OpenAIGPTTokenizer.from_pretrained('openai-gpt')
model = TFOpenAIGPTModel.from_pretrained('openai-gpt')
input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True))[None, :]  # Batch size 1
outputs = model(input_ids)
last_hidden_states = outputs[0]  # The last hidden-state is the first element of the output tuple

TFOpenAIGPTLMHeadModel

class transformers.TFOpenAIGPTLMHeadModel(*args, **kwargs)[source]

OpenAI GPT Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings).

Note

TF 2.0 models accept two formats as inputs:

  • having all inputs as keyword arguments (like PyTorch models), or

  • having all inputs as a list, tuple or dict in the first positional argument.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).

If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

  • a single Tensor with input_ids only and nothing else: model(input_ids)

  • a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])

  • a dictionary with one or several input Tensors associated to the input names given in the docstring: model({'input_ids': input_ids, 'token_type_ids': token_type_ids})

Parameters

config (OpenAIGPTConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

call(inputs, **kwargs)[source]

The TFOpenAIGPTLMHeadModel forward method overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
  • input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using transformers.OpenAIGPTTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.encode_plus() for details.

    What are input IDs?

  • attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional, defaults to None) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

    What are attention masks?

  • token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional, defaults to None) –

    Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token

    What are token type IDs?

  • position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional, defaults to None) –

    Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

    What are position IDs?

  • head_mask (tf.Tensor or Numpy array of shape (num_heads,) or (num_layers, num_heads), optional, defaults to None) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

  • inputs_embeds (tf.Tensor or Numpy array of shape (batch_size, sequence_length, hidden_size), optional, defaults to None) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • training (boolean, optional, defaults to False) – Whether to activate dropout modules (if set to True) during training or to de-activate them (if set to False) for evaluation.

Returns

prediction_scores (tf.Tensor of shape (batch_size, sequence_length, config.vocab_size)):

Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

hidden_states (tuple(tf.Tensor), optional, returned when config.output_hidden_states=True):

Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions (tuple(tf.Tensor), optional, returned when config.output_attentions=True):

Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

tuple(tf.Tensor) comprising various elements depending on the configuration (OpenAIGPTConfig) and inputs

Examples:

import tensorflow as tf
from transformers import OpenAIGPTTokenizer, TFOpenAIGPTLMHeadModel

tokenizer = OpenAIGPTTokenizer.from_pretrained('openai-gpt')
model = TFOpenAIGPTLMHeadModel.from_pretrained('openai-gpt')
input_ids = tf.constant(tokenizer.encode("Hello, my dog is cute", add_special_tokens=True))[None, :]  # Batch size 1
outputs = model(input_ids)
logits = outputs[0]
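
Continuing from the example above, a sketch of picking the most likely next token from the logits (a greedy choice, for illustration):

next_token_id = int(tf.argmax(logits[0, -1, :]))   # index of the highest-scoring vocabulary token
print(tokenizer.decode([next_token_id]))           # decode the single predicted token
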
get_output_embeddings()[source]

Returns the model’s output embeddings.

Returns

A Keras layer mapping hidden states to vocabulary.

Return type

tf.keras.layers.Layer

TFOpenAIGPTDoubleHeadsModel

class transformers.TFOpenAIGPTDoubleHeadsModel(*args, **kwargs)[source]

OpenAI GPT Model transformer with a language modeling and a multiple-choice classification head on top, e.g. for RocStories/SWAG tasks. The two heads are two linear layers. The language modeling head has its weights tied to the input embeddings; the classification head takes as input the hidden state of a specified classification token index in the input sequence.

Note

TF 2.0 models accept two formats as inputs:

  • having all inputs as keyword arguments (like PyTorch models), or

  • having all inputs as a list, tuple or dict in the first positional argument.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).

If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

  • a single Tensor with input_ids only and nothing else: model(input_ids)

  • a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])

  • a dictionary with one or several input Tensors associated to the input names given in the docstring: model({'input_ids': input_ids, 'token_type_ids': token_type_ids})

Parameters

config (OpenAIGPTConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.

call(inputs, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, mc_token_ids=None, training=False)[source]

The TFOpenAIGPTDoubleHeadsModel forward method overrides the __call__() special method.

Note

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters
  • input_ids (Numpy array or tf.Tensor of shape (batch_size, sequence_length)) –

    Indices of input sequence tokens in the vocabulary.

    Indices can be obtained using transformers.OpenAIGPTTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.encode_plus() for details.

    What are input IDs?

  • attention_mask (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional, defaults to None) –

    Mask to avoid performing attention on padding token indices. Mask values selected in [0, 1]: 1 for tokens that are NOT MASKED, 0 for MASKED tokens.

    What are attention masks?

  • token_type_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional, defaults to None) –

    Segment token indices to indicate first and second portions of the inputs. Indices are selected in [0, 1]: 0 corresponds to a sentence A token, 1 corresponds to a sentence B token

    What are token type IDs?

  • position_ids (tf.Tensor or Numpy array of shape (batch_size, sequence_length), optional, defaults to None) –

    Indices of positions of each input sequence tokens in the position embeddings. Selected in the range [0, config.max_position_embeddings - 1].

    What are position IDs?

  • head_mask (tf.Tensor or Numpy array of shape (num_heads,) or (num_layers, num_heads), optional, defaults to None) – Mask to nullify selected heads of the self-attention modules. Mask values selected in [0, 1]: 1 indicates the head is not masked, 0 indicates the head is masked.

  • inputs_embeds (tf.Tensor or Numpy array of shape (batch_size, sequence_length, hidden_size), optional, defaults to None) – Optionally, instead of passing input_ids you can choose to directly pass an embedded representation. This is useful if you want more control over how to convert input_ids indices into associated vectors than the model’s internal embedding lookup matrix.

  • training (boolean, optional, defaults to False) – Whether to activate dropout modules (if set to True) during training or to de-activate them (if set to False) for evaluation.

  • mc_token_ids (tf.Tensor or Numpy array of shape (batch_size, num_choices), optional, defaults to index of the last token of the input) – Index of the classification token in each input sequence. Selected in the range [0, input_ids.size(-1) - 1].

Returns

lm_prediction_scores (tf.Tensor of shape (batch_size, num_choices, sequence_length, config.vocab_size)):

Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

mc_prediction_scores (tf.Tensor of shape (batch_size, num_choices)):

Prediction scores of the multiple choice classification head (scores for each choice before SoftMax).

past (List[tf.Tensor] of length config.n_layers with each tensor of shape (2, batch_size, num_heads, sequence_length, embed_size_per_head)):

Contains pre-computed hidden-states (key and values in the attention blocks). Can be used (see past input) to speed up sequential decoding. The token ids which have their past given to this model should not be passed as input ids as they have already been computed.

hidden_states (tuple(tf.Tensor), optional, returned when config.output_hidden_states=True):

Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).

Hidden-states of the model at the output of each layer plus the initial embedding outputs.

attentions (tuple(tf.Tensor), optional, returned when config.output_attentions=True):

Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length).

Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.

Return type

tuple(tf.Tensor) comprising various elements depending on the configuration (OpenAIGPTConfig) and inputs

Examples:

# For example purposes. Not runnable as-is: resizing the token embeddings for the new [CLS] token
# is not currently implemented in TF 2.0.
import tensorflow as tf
from transformers import OpenAIGPTTokenizer, TFOpenAIGPTDoubleHeadsModel

tokenizer = OpenAIGPTTokenizer.from_pretrained('openai-gpt')
model = TFOpenAIGPTDoubleHeadsModel.from_pretrained('openai-gpt')

tokenizer.add_special_tokens({'cls_token': '[CLS]'})  # Add a [CLS] to the vocabulary (we should train it also!)
model.resize_token_embeddings(len(tokenizer))  # Update the model embeddings with the new vocabulary size
print(tokenizer.cls_token_id, len(tokenizer))  # The new token is the last token of the vocabulary

choices = ["Hello, my dog is cute [CLS]", "Hello, my cat is cute [CLS]"]
input_ids = tf.constant([tokenizer.encode(s) for s in choices])[None, :]  # Batch size 1, 2 choices
mc_token_ids = tf.constant([input_ids.shape[-1] - 1, input_ids.shape[-1] - 1])[None, :]  # Batch size 1
outputs = model(input_ids, mc_token_ids=mc_token_ids)
lm_prediction_scores, mc_prediction_scores = outputs[:2]
get_output_embeddings()[source]

Returns the model’s output embeddings.

Returns

A Keras layer mapping hidden states to vocabulary.

Return type

tf.keras.layers.Layer