OpenAI GPT
Overview
The OpenAI GPT model was proposed in Improving Language Understanding by Generative Pre-Training by Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever. It's a causal (unidirectional) transformer pre-trained using language modeling on a large corpus with long range dependencies, the Toronto Book Corpus.
The abstract from the paper is the following:
Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering, semantic similarity assessment, and document classification. Although large unlabeled text corpora are abundant, labeled data for learning these specific tasks is scarce, making it challenging for discriminatively trained models to perform adequately. We demonstrate that large gains on these tasks can be realized by generative pretraining of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task. In contrast to previous approaches, we make use of task-aware input transformations during fine-tuning to achieve effective transfer while requiring minimal changes to the model architecture. We demonstrate the effectiveness of our approach on a wide range of benchmarks for natural language understanding. Our general task-agnostic model outperforms discriminatively trained models that use architectures specifically crafted for each task, significantly improving upon the state of the art in 9 out of the 12 tasks studied.
Tips:
GPT is a model with absolute position embeddings, so it is usually advised to pad the inputs on the right rather than the left (see the padding sketch after these tips).
GPT was trained with a causal language modeling (CLM) objective and is therefore powerful at predicting the next token in a sequence. Leveraging this feature allows GPT to generate syntactically coherent text, as can be observed in the run_generation.py example script.
Write With Transformer is a webapp created and hosted by Hugging Face showcasing the generative capabilities of several models. GPT is one of them.
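The padding tip above can be illustrated with a short, hedged sketch (not part of the original documentation). The pretrained openai-gpt tokenizer ships without a padding token, so the snippet below reuses the unknown token for padding purely for illustration:

>>> from transformers import OpenAIGPTTokenizer
>>> tokenizer = OpenAIGPTTokenizer.from_pretrained('openai-gpt')
>>> tokenizer.pad_token = tokenizer.unk_token  # assumption: reuse the unk token as a pad token
>>> tokenizer.padding_side = "right"           # pad on the right because of absolute position embeddings
>>> batch = tokenizer(["Hello, my dog is cute", "Hello"], padding=True, return_tensors="pt")
>>> batch["attention_mask"]  # zeros mark the padded (right-hand) positions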
This model was contributed by thomwolf. The original code can be found here.
Note:
If you want to reproduce the original tokenization process of the OpenAI GPT paper, you will need to install ftfy and SpaCy:

pip install spacy ftfy==4.4.3
python -m spacy download en

If you don't install ftfy and SpaCy, the OpenAIGPTTokenizer will default to tokenizing using BERT's BasicTokenizer followed by Byte-Pair Encoding (which should be fine for most usage, don't worry).
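As a quick, hedged check of the tokenization path, the sketch below simply tokenizes a string; whichever pre-BPE tokenizer is active, inputs are lowercased before Byte-Pair Encoding. The sub-tokens in the comment are indicative only, not guaranteed output:

>>> from transformers import OpenAIGPTTokenizer
>>> tokenizer = OpenAIGPTTokenizer.from_pretrained('openai-gpt')
>>> tokenizer.tokenize("Hello World!")  # lowercased BPE pieces, e.g. something like ['hello</w>', 'world</w>', '!</w>']
>>> tokenizer("Hello World!")["input_ids"]  # the corresponding vocabulary ids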
OpenAIGPTConfig
class transformers.OpenAIGPTConfig(vocab_size=40478, n_positions=512, n_ctx=512, n_embd=768, n_layer=12, n_head=12, afn='gelu', resid_pdrop=0.1, embd_pdrop=0.1, attn_pdrop=0.1, layer_norm_epsilon=1e-05, initializer_range=0.02, predict_special_tokens=True, summary_type='cls_index', summary_use_proj=True, summary_activation=None, summary_proj_to_labels=True, summary_first_dropout=0.1, **kwargs)

This is the configuration class to store the configuration of an OpenAIGPTModel or a TFOpenAIGPTModel. It is used to instantiate a GPT model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a configuration similar to that of the GPT architecture from OpenAI.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation of PretrainedConfig for more information.

Parameters:
vocab_size (int, optional, defaults to 40478) – Vocabulary size of the GPT model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling OpenAIGPTModel or TFOpenAIGPTModel.
n_positions (int, optional, defaults to 512) – The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048).
n_ctx (int, optional, defaults to 512) – Dimensionality of the causal mask (usually the same as n_positions).
n_embd (int, optional, defaults to 768) – Dimensionality of the embeddings and hidden states.
n_layer (int, optional, defaults to 12) – Number of hidden layers in the Transformer encoder.
n_head (int, optional, defaults to 12) – Number of attention heads for each attention layer in the Transformer encoder.
afn (str or Callable, optional, defaults to "gelu") – The non-linear activation function (function or string) in the encoder and pooler. If string, "gelu", "relu", "silu" and "gelu_new" are supported.
resid_pdrop (float, optional, defaults to 0.1) – The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
embd_pdrop (float, optional, defaults to 0.1) – The dropout ratio for the embeddings.
attn_pdrop (float, optional, defaults to 0.1) – The dropout ratio for the attention.
layer_norm_epsilon (float, optional, defaults to 1e-5) – The epsilon to use in the layer normalization layers.
initializer_range (float, optional, defaults to 0.02) – The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
predict_special_tokens (bool, optional, defaults to True) – Whether or not special tokens should be predicted when the model has a language modeling head.
summary_type (str, optional, defaults to "cls_index") – Argument used when doing sequence summary, used in the models OpenAIGPTDoubleHeadsModel and TFOpenAIGPTDoubleHeadsModel. Has to be one of the following options: "last" (take the last token hidden state, like XLNet), "first" (take the first token hidden state, like BERT), "mean" (take the mean of all tokens' hidden states), "cls_index" (supply a Tensor of classification token positions, like GPT/GPT-2), "attn" (not implemented now, use multi-head attention).
summary_use_proj (bool, optional, defaults to True) – Argument used when doing sequence summary, used in the models OpenAIGPTDoubleHeadsModel and TFOpenAIGPTDoubleHeadsModel. Whether or not to add a projection after the vector extraction.
summary_activation (str, optional) – Argument used when doing sequence summary, used in the models OpenAIGPTDoubleHeadsModel and TFOpenAIGPTDoubleHeadsModel. Pass "tanh" for a tanh activation to the output, any other value will result in no activation.
summary_proj_to_labels (bool, optional, defaults to True) – Argument used when doing sequence summary, used in the models OpenAIGPTDoubleHeadsModel and TFOpenAIGPTDoubleHeadsModel. Whether the projection outputs should have config.num_labels or config.hidden_size classes.
summary_first_dropout (float, optional, defaults to 0.1) – Argument used when doing sequence summary, used in the models OpenAIGPTDoubleHeadsModel and TFOpenAIGPTDoubleHeadsModel. The dropout ratio to be used after the projection and activation.
use_cache (bool, optional, defaults to True) – Whether or not the model should return the last key/values attentions (not used by all models).
Examples:

>>> from transformers import OpenAIGPTConfig, OpenAIGPTModel

>>> # Initializing a GPT configuration
>>> configuration = OpenAIGPTConfig()

>>> # Initializing a model from the configuration
>>> model = OpenAIGPTModel(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
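As an informal illustration (not from the original documentation), the defaults listed above can be overridden when the configuration is created; the values below are arbitrary examples, not recommended settings:

>>> from transformers import OpenAIGPTConfig, OpenAIGPTModel
>>> # Hypothetical smaller architecture, purely for illustration
>>> small_config = OpenAIGPTConfig(n_layer=6, n_head=8, n_embd=512, summary_type="last")
>>> model = OpenAIGPTModel(small_config)
>>> model.config.n_layer  # 6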
OpenAIGPTTokenizer
class transformers.OpenAIGPTTokenizer(vocab_file, merges_file, unk_token='<unk>', **kwargs)

Construct a GPT Tokenizer. Based on Byte-Pair-Encoding with the following peculiarities:

lowercases all inputs,
uses SpaCy tokenizer and ftfy for pre-BPE tokenization if they are installed, falling back to BERT's BasicTokenizer if not.

This tokenizer inherits from PreTrainedTokenizer which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.

Parameters:
vocab_file (str) – Path to the vocabulary file.
merges_file (str) – Path to the merges file.
unk_token (str, optional, defaults to "<unk>") – The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
save_vocabulary(save_directory: str, filename_prefix: Optional[str] = None) → Tuple[str]

Save only the vocabulary of the tokenizer (vocabulary + added tokens). This method won't save the configuration and special token mappings of the tokenizer. Use _save_pretrained() to save the whole state of the tokenizer.

Parameters:
save_directory (str) – The directory in which to save the vocabulary.
filename_prefix (str, optional) – An optional prefix to add to the names of the saved files.

Returns: Paths to the files saved.
Return type: Tuple(str)
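A minimal, hedged sketch of the method above (the temporary directory is an arbitrary choice, any writable path works):

>>> import os, tempfile
>>> from transformers import OpenAIGPTTokenizer
>>> tokenizer = OpenAIGPTTokenizer.from_pretrained('openai-gpt')
>>> save_dir = tempfile.mkdtemp()
>>> vocab_files = tokenizer.save_vocabulary(save_dir)
>>> [os.path.basename(f) for f in vocab_files]  # typically the vocabulary and merges files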
OpenAIGPTTokenizerFast
class transformers.OpenAIGPTTokenizerFast(vocab_file, merges_file, tokenizer_file=None, unk_token='<unk>', **kwargs)

Construct a "fast" GPT Tokenizer (backed by HuggingFace's tokenizers library). Based on Byte-Pair-Encoding with the following peculiarities:

lowercases all inputs,
uses BERT's BasicTokenizer for pre-BPE tokenization.

This tokenizer inherits from PreTrainedTokenizerFast which contains most of the main methods. Users should refer to this superclass for more information regarding those methods.

Parameters:
vocab_file (str) – Path to the vocabulary file.
merges_file (str) – Path to the merges file.
unk_token (str, optional, defaults to "<unk>") – The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this token instead.
save_vocabulary(save_directory: str, filename_prefix: Optional[str] = None) → Tuple[str]

Save only the vocabulary of the tokenizer (vocabulary + added tokens). This method won't save the configuration and special token mappings of the tokenizer. Use _save_pretrained() to save the whole state of the tokenizer.

Parameters:
save_directory (str) – The directory in which to save the vocabulary.
filename_prefix (str, optional) – An optional prefix to add to the names of the saved files.

Returns: Paths to the files saved.
Return type: Tuple(str)

slow_tokenizer_class
alias of transformers.models.openai.tokenization_openai.OpenAIGPTTokenizer
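A short, hedged sketch contrasting the fast tokenizer with the slow one (offset mapping is a feature of the fast backend; the example text is arbitrary):

>>> from transformers import OpenAIGPTTokenizerFast
>>> tokenizer = OpenAIGPTTokenizerFast.from_pretrained('openai-gpt')
>>> enc = tokenizer("Hello, my dog is cute", return_offsets_mapping=True)
>>> enc["input_ids"]
>>> enc["offset_mapping"]  # character spans for each token, only available with fast tokenizers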
OpenAI specific outputs
class transformers.models.openai.modeling_openai.OpenAIGPTDoubleHeadsModelOutput(loss: Optional[torch.FloatTensor] = None, mc_loss: Optional[torch.FloatTensor] = None, logits: torch.FloatTensor = None, mc_logits: torch.FloatTensor = None, hidden_states: Optional[Tuple[torch.FloatTensor]] = None, attentions: Optional[Tuple[torch.FloatTensor]] = None)

Base class for outputs of models predicting if two sentences are consecutive or not.

Parameters:
loss (torch.FloatTensor of shape (1,), optional, returned when labels is provided) – Language modeling loss.
mc_loss (torch.FloatTensor of shape (1,), optional, returned when mc_labels is provided) – Multiple choice classification loss.
logits (torch.FloatTensor of shape (batch_size, num_choices, sequence_length, config.vocab_size)) – Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
mc_logits (torch.FloatTensor of shape (batch_size, num_choices)) – Prediction scores of the multiple choice classification head (scores for each choice before SoftMax).
hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
class transformers.models.openai.modeling_tf_openai.TFOpenAIGPTDoubleHeadsModelOutput(logits: tensorflow.python.framework.ops.Tensor = None, mc_logits: tensorflow.python.framework.ops.Tensor = None, hidden_states: Optional[Tuple[tensorflow.python.framework.ops.Tensor]] = None, attentions: Optional[Tuple[tensorflow.python.framework.ops.Tensor]] = None)

Base class for outputs of models predicting if two sentences are consecutive or not.

Parameters:
logits (tf.Tensor of shape (batch_size, num_choices, sequence_length, config.vocab_size)) – Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
mc_logits (tf.Tensor of shape (batch_size, num_choices)) – Prediction scores of the multiple choice classification head (scores for each choice before SoftMax).
hidden_states (tuple(tf.Tensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of tf.Tensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size). Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (tuple(tf.Tensor), optional, returned when output_attentions=True is passed or when config.output_attentions=True) – Tuple of tf.Tensor (one for each layer) of shape (batch_size, num_heads, sequence_length, sequence_length). Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
OpenAIGPTModel
class transformers.OpenAIGPTModel(config)

The bare OpenAI GPT transformer model outputting raw hidden-states without any specific head on top.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters:
config (OpenAIGPTConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None)

The OpenAIGPTModel forward method, overrides the __call__() special method.

Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters:
input_ids (
torch.LongTensor
of shape(batch_size, sequence_length)
) –Indices of input sequence tokens in the vocabulary.
Indices can be obtained using
OpenAIGPTTokenizer
. Seetransformers.PreTrainedTokenizer.encode()
andtransformers.PreTrainedTokenizer.__call__()
for details.attention_mask (
torch.FloatTensor
of shape(batch_size, sequence_length)
, optional) –Mask to avoid performing attention on padding token indices. Mask values selected in
[0, 1]
:1 for tokens that are not masked,
0 for tokens that are masked.
token_type_ids (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional) –Segment token indices to indicate first and second portions of the inputs. Indices are selected in
[0, 1]
:0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
position_ids (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional) –Indices of positions of each input sequence tokens in the position embeddings. Selected in the range
[0, config.max_position_embeddings - 1]
.head_mask (
torch.FloatTensor
of shape(num_heads,)
or(num_layers, num_heads)
, optional) –Mask to nullify selected heads of the self-attention modules. Mask values selected in
[0, 1]
:1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (
torch.FloatTensor
of shape(batch_size, sequence_length, hidden_size)
, optional) – Optionally, instead of passinginput_ids
you can choose to directly pass an embedded representation. This is useful if you want more control over how to convertinput_ids
indices into associated vectors than the model’s internal embedding lookup matrix.output_attentions (
bool
, optional) – Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail.output_hidden_states (
bool
, optional) – Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail.return_dict (
bool
, optional) – Whether or not to return aModelOutput
instead of a plain tuple.
- Returns
A
BaseModelOutput
(ifreturn_dict=True
is passed or whenconfig.return_dict=True
) or a tuple oftorch.FloatTensor
comprising various elements depending on the configuration (OpenAIGPTConfig
) and inputs.last_hidden_state (
torch.FloatTensor
of shape(batch_size, sequence_length, hidden_size)
) – Sequence of hidden-states at the output of the last layer of the model.hidden_states (
tuple(torch.FloatTensor)
, optional, returned whenoutput_hidden_states=True
is passed or whenconfig.output_hidden_states=True
) – Tuple oftorch.FloatTensor
(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size)
.Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (
tuple(torch.FloatTensor)
, optional, returned whenoutput_attentions=True
is passed or whenconfig.output_attentions=True
) – Tuple oftorch.FloatTensor
(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length)
.Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Return type: BaseModelOutput or tuple(torch.FloatTensor)

Example:

>>> from transformers import OpenAIGPTTokenizer, OpenAIGPTModel
>>> import torch

>>> tokenizer = OpenAIGPTTokenizer.from_pretrained('openai-gpt')
>>> model = OpenAIGPTModel.from_pretrained('openai-gpt')

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)

>>> last_hidden_states = outputs.last_hidden_state
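A brief, hedged follow-up (not part of the original example) showing how the optional outputs documented above can be requested on the same model and inputs:

>>> outputs = model(**inputs, output_hidden_states=True, output_attentions=True)
>>> len(outputs.hidden_states)   # embedding output + one entry per layer
>>> outputs.attentions[0].shape  # (batch_size, num_heads, sequence_length, sequence_length)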
OpenAIGPTLMHeadModel
class transformers.OpenAIGPTLMHeadModel(config)

OpenAI GPT Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings).

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters:
config (OpenAIGPTConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None)

The OpenAIGPTLMHeadModel forward method, overrides the __call__() special method.

Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters:
input_ids (
torch.LongTensor
of shape(batch_size, sequence_length)
) –Indices of input sequence tokens in the vocabulary.
Indices can be obtained using
OpenAIGPTTokenizer
. Seetransformers.PreTrainedTokenizer.encode()
andtransformers.PreTrainedTokenizer.__call__()
for details.attention_mask (
torch.FloatTensor
of shape(batch_size, sequence_length)
, optional) –Mask to avoid performing attention on padding token indices. Mask values selected in
[0, 1]
:1 for tokens that are not masked,
0 for tokens that are masked.
token_type_ids (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional) –Segment token indices to indicate first and second portions of the inputs. Indices are selected in
[0, 1]
:0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
position_ids (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional) –Indices of positions of each input sequence tokens in the position embeddings. Selected in the range
[0, config.max_position_embeddings - 1]
.head_mask (
torch.FloatTensor
of shape(num_heads,)
or(num_layers, num_heads)
, optional) –Mask to nullify selected heads of the self-attention modules. Mask values selected in
[0, 1]
:1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (
torch.FloatTensor
of shape(batch_size, sequence_length, hidden_size)
, optional) – Optionally, instead of passinginput_ids
you can choose to directly pass an embedded representation. This is useful if you want more control over how to convertinput_ids
indices into associated vectors than the model’s internal embedding lookup matrix.output_attentions (
bool
, optional) – Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail.output_hidden_states (
bool
, optional) – Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail.return_dict (
bool
, optional) – Whether or not to return aModelOutput
instead of a plain tuple.labels (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional) – Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can setlabels = input_ids
Indices are selected in[-100, 0, ..., config.vocab_size]
All labels set to-100
are ignored (masked), the loss is only computed for labels in[0, ..., config.vocab_size]
- Returns
A
CausalLMOutput
(ifreturn_dict=True
is passed or whenconfig.return_dict=True
) or a tuple oftorch.FloatTensor
comprising various elements depending on the configuration (OpenAIGPTConfig
) and inputs.loss (
torch.FloatTensor
of shape(1,)
, optional, returned whenlabels
is provided) – Language modeling loss (for next-token prediction).logits (
torch.FloatTensor
of shape(batch_size, sequence_length, config.vocab_size)
) – Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).hidden_states (
tuple(torch.FloatTensor)
, optional, returned whenoutput_hidden_states=True
is passed or whenconfig.output_hidden_states=True
) – Tuple oftorch.FloatTensor
(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size)
.Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (
tuple(torch.FloatTensor)
, optional, returned whenoutput_attentions=True
is passed or whenconfig.output_attentions=True
) – Tuple oftorch.FloatTensor
(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length)
.Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Return type: CausalLMOutput or tuple(torch.FloatTensor)

Example:

>>> import torch
>>> from transformers import OpenAIGPTTokenizer, OpenAIGPTLMHeadModel

>>> tokenizer = OpenAIGPTTokenizer.from_pretrained('openai-gpt')
>>> model = OpenAIGPTLMHeadModel.from_pretrained('openai-gpt')

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs, labels=inputs["input_ids"])
>>> loss = outputs.loss
>>> logits = outputs.logits
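Since the model carries a causal LM head, it can also be used with the generic generate() method; the sampling settings below are illustrative assumptions, not recommendations from the original docs:

>>> generated = model.generate(inputs["input_ids"], max_length=30, do_sample=True, top_k=50)
>>> tokenizer.decode(generated[0])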
OpenAIGPTDoubleHeadsModel
class transformers.OpenAIGPTDoubleHeadsModel(config)

OpenAI GPT Model transformer with a language modeling and a multiple-choice classification head on top, e.g. for RocStories/SWAG tasks. The two heads are two linear layers. The language modeling head has its weights tied to the input embeddings, the classification head takes as input the hidden state at a specified classification token index in the input sequence.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters:
config (OpenAIGPTConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, mc_token_ids=None, labels=None, mc_labels=None, output_attentions=None, output_hidden_states=None, return_dict=None)

The OpenAIGPTDoubleHeadsModel forward method, overrides the __call__() special method.

Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters:
input_ids (
torch.LongTensor
of shape(batch_size, sequence_length)
) –Indices of input sequence tokens in the vocabulary.
Indices can be obtained using
OpenAIGPTTokenizer
. Seetransformers.PreTrainedTokenizer.encode()
andtransformers.PreTrainedTokenizer.__call__()
for details.attention_mask (
torch.FloatTensor
of shape(batch_size, sequence_length)
, optional) –Mask to avoid performing attention on padding token indices. Mask values selected in
[0, 1]
:1 for tokens that are not masked,
0 for tokens that are masked.
token_type_ids (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional) –Segment token indices to indicate first and second portions of the inputs. Indices are selected in
[0, 1]
:0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
position_ids (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional) –Indices of positions of each input sequence tokens in the position embeddings. Selected in the range
[0, config.max_position_embeddings - 1]
.head_mask (
torch.FloatTensor
of shape(num_heads,)
or(num_layers, num_heads)
, optional) –Mask to nullify selected heads of the self-attention modules. Mask values selected in
[0, 1]
:1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (
torch.FloatTensor
of shape(batch_size, sequence_length, hidden_size)
, optional) – Optionally, instead of passinginput_ids
you can choose to directly pass an embedded representation. This is useful if you want more control over how to convertinput_ids
indices into associated vectors than the model’s internal embedding lookup matrix.output_attentions (
bool
, optional) – Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail.output_hidden_states (
bool
, optional) – Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail.return_dict (
bool
, optional) – Whether or not to return aModelOutput
instead of a plain tuple.mc_token_ids (
torch.LongTensor
of shape(batch_size, num_choices)
, optional, default to index of the last token of the input) – Index of the classification token in each input sequence. Selected in the range[0, input_ids.size(-1) - 1]
. labels (torch.LongTensor of shape (batch_size, sequence_length), optional) – Labels for language modeling. Note that the labels are shifted inside the model, i.e. you can set labels = input_ids. Indices are selected in [-100, 0, ..., config.vocab_size]. All labels set to -100 are ignored (masked), the loss is only computed for labels in [0, ..., config.vocab_size].
mc_labels (
torch.LongTensor
of shape(batch_size)
, optional) – Labels for computing the multiple choice classification loss. Indices should be in[0, ..., num_choices]
where num_choices is the size of the second dimension of the input tensors. (see input_ids above)
- Returns
A
OpenAIGPTDoubleHeadsModelOutput
(ifreturn_dict=True
is passed or whenconfig.return_dict=True
) or a tuple oftorch.FloatTensor
comprising various elements depending on the configuration (OpenAIGPTConfig
) and inputs.loss (
torch.FloatTensor
of shape(1,)
, optional, returned whenlabels
is provided) – Language modeling loss.mc_loss (
torch.FloatTensor
of shape(1,)
, optional, returned whenmc_labels
is provided) – Multiple choice classification loss.logits (
torch.FloatTensor
of shape(batch_size, num_choices, sequence_length, config.vocab_size)
) – Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).mc_logits (
torch.FloatTensor
of shape(batch_size, num_choices)
) – Prediction scores of the multiple choice classification head (scores for each choice before SoftMax).hidden_states (
tuple(torch.FloatTensor)
, optional, returned whenoutput_hidden_states=True
is passed or whenconfig.output_hidden_states=True
) – Tuple oftorch.FloatTensor
(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size)
.Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (
tuple(torch.FloatTensor)
, optional, returned whenoutput_attentions=True
is passed or whenconfig.output_attentions=True
) – Tuple oftorch.FloatTensor
(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length)
.Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Examples:

>>> from transformers import OpenAIGPTTokenizer, OpenAIGPTDoubleHeadsModel
>>> import torch

>>> tokenizer = OpenAIGPTTokenizer.from_pretrained('openai-gpt')
>>> model = OpenAIGPTDoubleHeadsModel.from_pretrained('openai-gpt')
>>> tokenizer.add_special_tokens({'cls_token': '[CLS]'})  # Add a [CLS] to the vocabulary (we should train it also!)
>>> model.resize_token_embeddings(len(tokenizer))

>>> choices = ["Hello, my dog is cute [CLS]", "Hello, my cat is cute [CLS]"]
>>> input_ids = torch.tensor([tokenizer.encode(s) for s in choices]).unsqueeze(0)  # Batch size 1, 2 choices
>>> mc_token_ids = torch.tensor([input_ids.size(-1)-1, input_ids.size(-1)-1]).unsqueeze(0)  # Batch size 1

>>> outputs = model(input_ids, mc_token_ids=mc_token_ids)
>>> lm_logits = outputs.logits
>>> mc_logits = outputs.mc_logits

Return type: OpenAIGPTDoubleHeadsModelOutput or tuple(torch.FloatTensor)
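Continuing the example above, the multiple choice loss documented earlier can be obtained by also passing mc_labels; the choice index used here is an arbitrary illustration:

>>> mc_labels = torch.tensor([0])  # shape (batch_size,); pretend the first choice is the correct one
>>> outputs = model(input_ids, mc_token_ids=mc_token_ids, mc_labels=mc_labels)
>>> mc_loss = outputs.mc_loss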
OpenAIGPTForSequenceClassification
class transformers.OpenAIGPTForSequenceClassification(config)

The original OpenAI GPT Model transformer with a sequence classification head on top (linear layer).

OpenAIGPTForSequenceClassification uses the last token in order to do the classification, as other causal models (e.g. GPT-2) do. Since it does classification on the last token, it needs to know the position of the last token. If a pad_token_id is defined in the configuration, it finds the last token that is not a padding token in each row. If no pad_token_id is defined, it simply takes the last value in each row of the batch. Since it cannot guess the padding tokens when inputs_embeds are passed instead of input_ids, it does the same (takes the last value in each row of the batch).

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

Parameters:
config (OpenAIGPTConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
forward(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, output_attentions=None, output_hidden_states=None, return_dict=None)

The OpenAIGPTForSequenceClassification forward method, overrides the __call__() special method.

Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters:
input_ids (
torch.LongTensor
of shape(batch_size, sequence_length)
) –Indices of input sequence tokens in the vocabulary.
Indices can be obtained using
OpenAIGPTTokenizer
. Seetransformers.PreTrainedTokenizer.encode()
andtransformers.PreTrainedTokenizer.__call__()
for details.attention_mask (
torch.FloatTensor
of shape(batch_size, sequence_length)
, optional) –Mask to avoid performing attention on padding token indices. Mask values selected in
[0, 1]
:1 for tokens that are not masked,
0 for tokens that are masked.
token_type_ids (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional) –Segment token indices to indicate first and second portions of the inputs. Indices are selected in
[0, 1]
:0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
position_ids (
torch.LongTensor
of shape(batch_size, sequence_length)
, optional) –Indices of positions of each input sequence tokens in the position embeddings. Selected in the range
[0, config.max_position_embeddings - 1]
.head_mask (
torch.FloatTensor
of shape(num_heads,)
or(num_layers, num_heads)
, optional) –Mask to nullify selected heads of the self-attention modules. Mask values selected in
[0, 1]
:1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (
torch.FloatTensor
of shape(batch_size, sequence_length, hidden_size)
, optional) – Optionally, instead of passinginput_ids
you can choose to directly pass an embedded representation. This is useful if you want more control over how to convertinput_ids
indices into associated vectors than the model’s internal embedding lookup matrix.output_attentions (
bool
, optional) – Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail.output_hidden_states (
bool
, optional) – Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail.return_dict (
bool
, optional) – Whether or not to return aModelOutput
instead of a plain tuple.labels (
torch.LongTensor
of shape(batch_size,)
, optional) – Labels for computing the sequence classification/regression loss. Indices should be in[0, ..., config.num_labels - 1]
. Ifconfig.num_labels == 1
a regression loss is computed (Mean-Square loss), Ifconfig.num_labels > 1
a classification loss is computed (Cross-Entropy).
- Returns
A
SequenceClassifierOutput
(ifreturn_dict=True
is passed or whenconfig.return_dict=True
) or a tuple oftorch.FloatTensor
comprising various elements depending on the configuration (OpenAIGPTConfig
) and inputs.loss (
torch.FloatTensor
of shape(1,)
, optional, returned whenlabels
is provided) – Classification (or regression if config.num_labels==1) loss.logits (
torch.FloatTensor
of shape(batch_size, config.num_labels)
) – Classification (or regression if config.num_labels==1) scores (before SoftMax).hidden_states (
tuple(torch.FloatTensor)
, optional, returned whenoutput_hidden_states=True
is passed or whenconfig.output_hidden_states=True
) – Tuple oftorch.FloatTensor
(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size)
.Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (
tuple(torch.FloatTensor)
, optional, returned whenoutput_attentions=True
is passed or whenconfig.output_attentions=True
) – Tuple oftorch.FloatTensor
(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length)
.Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Return type: SequenceClassifierOutput or tuple(torch.FloatTensor)

Example:

>>> from transformers import OpenAIGPTTokenizer, OpenAIGPTForSequenceClassification
>>> import torch

>>> tokenizer = OpenAIGPTTokenizer.from_pretrained('openai-gpt')
>>> model = OpenAIGPTForSequenceClassification.from_pretrained('openai-gpt')

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
>>> labels = torch.tensor([1]).unsqueeze(0)  # Batch size 1
>>> outputs = model(**inputs, labels=labels)
>>> loss = outputs.loss
>>> logits = outputs.logits
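The classification head is sized by config.num_labels (typically 2 by default). A hedged sketch of loading the model with a different, arbitrary label count:

>>> model = OpenAIGPTForSequenceClassification.from_pretrained('openai-gpt', num_labels=3)
>>> model.config.num_labels  # 3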
TFOpenAIGPTModel
class transformers.TFOpenAIGPTModel(*args, **kwargs)

The bare OpenAI GPT transformer model outputting raw hidden-states without any specific head on top.

This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.).

This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

Note: TF 2.0 models accept two formats as inputs:

having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).

If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids})

Parameters:
config (OpenAIGPTConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
call(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None, training=False, **kwargs)

The TFOpenAIGPTModel forward method, overrides the __call__() special method.

Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters:
input_ids (
Numpy array
ortf.Tensor
of shape(batch_size, sequence_length)
) –Indices of input sequence tokens in the vocabulary.
Indices can be obtained using
OpenAIGPTTokenizer
. Seetransformers.PreTrainedTokenizer.__call__()
andtransformers.PreTrainedTokenizer.encode()
for details.attention_mask (
tf.Tensor
orNumpy array
of shape(batch_size, sequence_length)
, optional) –Mask to avoid performing attention on padding token indices. Mask values selected in
[0, 1]
:1 for tokens that are not masked,
0 for tokens that are masked.
token_type_ids (
tf.Tensor
orNumpy array
of shape(batch_size, sequence_length)
, optional) –Segment token indices to indicate first and second portions of the inputs. Indices are selected in
[0, 1]
:0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
position_ids (
tf.Tensor
orNumpy array
of shape(batch_size, sequence_length)
, optional) –Indices of positions of each input sequence tokens in the position embeddings. Selected in the range
[0, config.max_position_embeddings - 1]
.head_mask (
tf.Tensor
orNumpy array
of shape(num_heads,)
or(num_layers, num_heads)
, optional) –Mask to nullify selected heads of the self-attention modules. Mask values selected in
[0, 1]
:1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (
tf.Tensor
orNumpy array
of shape(batch_size, sequence_length, hidden_size)
, optional) – Optionally, instead of passinginput_ids
you can choose to directly pass an embedded representation. This is useful if you want more control over how to convertinput_ids
indices into associated vectors than the model’s internal embedding lookup matrix.output_attentions (
bool
, optional) – Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.output_hidden_states (
bool
, optional) – Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.return_dict (
bool
, optional) – Whether or not to return aModelOutput
instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.training (
bool
, optional, defaults toFalse
) – Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).
- Returns
A
TFBaseModelOutput
(ifreturn_dict=True
is passed or whenconfig.return_dict=True
) or a tuple oftf.Tensor
comprising various elements depending on the configuration (OpenAIGPTConfig
) and inputs.last_hidden_state (
tf.Tensor
of shape(batch_size, sequence_length, hidden_size)
) – Sequence of hidden-states at the output of the last layer of the model.hidden_states (
tuple(tf.FloatTensor)
, optional, returned whenoutput_hidden_states=True
is passed or whenconfig.output_hidden_states=True
) – Tuple oftf.Tensor
(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size)
.Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (
tuple(tf.Tensor)
, optional, returned whenoutput_attentions=True
is passed or whenconfig.output_attentions=True
) – Tuple oftf.Tensor
(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length)
.Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Return type: TFBaseModelOutput or tuple(tf.Tensor)

Example:

>>> from transformers import OpenAIGPTTokenizer, TFOpenAIGPTModel
>>> import tensorflow as tf

>>> tokenizer = OpenAIGPTTokenizer.from_pretrained('openai-gpt')
>>> model = TFOpenAIGPTModel.from_pretrained('openai-gpt')

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> outputs = model(inputs)

>>> last_hidden_states = outputs.last_hidden_state
TFOpenAIGPTLMHeadModel
class transformers.TFOpenAIGPTLMHeadModel(*args, **kwargs)

OpenAI GPT Model transformer with a language modeling head on top (linear layer with weights tied to the input embeddings).

This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.).

This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

Note: TF 2.0 models accept two formats as inputs:

having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).

If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids})

Parameters:
config (OpenAIGPTConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
call(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None, labels=None, training=False, **kwargs)

The TFOpenAIGPTLMHeadModel forward method, overrides the __call__() special method.

Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters:
input_ids (
Numpy array
ortf.Tensor
of shape(batch_size, sequence_length)
) –Indices of input sequence tokens in the vocabulary.
Indices can be obtained using
OpenAIGPTTokenizer
. Seetransformers.PreTrainedTokenizer.__call__()
andtransformers.PreTrainedTokenizer.encode()
for details.attention_mask (
tf.Tensor
orNumpy array
of shape(batch_size, sequence_length)
, optional) –Mask to avoid performing attention on padding token indices. Mask values selected in
[0, 1]
:1 for tokens that are not masked,
0 for tokens that are masked.
token_type_ids (
tf.Tensor
orNumpy array
of shape(batch_size, sequence_length)
, optional) –Segment token indices to indicate first and second portions of the inputs. Indices are selected in
[0, 1]
:0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
position_ids (
tf.Tensor
orNumpy array
of shape(batch_size, sequence_length)
, optional) –Indices of positions of each input sequence tokens in the position embeddings. Selected in the range
[0, config.max_position_embeddings - 1]
.head_mask (
tf.Tensor
orNumpy array
of shape(num_heads,)
or(num_layers, num_heads)
, optional) –Mask to nullify selected heads of the self-attention modules. Mask values selected in
[0, 1]
:1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (
tf.Tensor
orNumpy array
of shape(batch_size, sequence_length, hidden_size)
, optional) – Optionally, instead of passinginput_ids
you can choose to directly pass an embedded representation. This is useful if you want more control over how to convertinput_ids
indices into associated vectors than the model’s internal embedding lookup matrix.output_attentions (
bool
, optional) – Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.output_hidden_states (
bool
, optional) – Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.return_dict (
bool
, optional) – Whether or not to return aModelOutput
instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.training (
bool
, optional, defaults toFalse
) – Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).labels (
tf.Tensor
of shape(batch_size, sequence_length)
, optional) – Labels for computing the cross entropy classification loss. Indices should be in[0, ..., config.vocab_size - 1]
.
- Returns
A
TFCausalLMOutput
(ifreturn_dict=True
is passed or whenconfig.return_dict=True
) or a tuple oftf.Tensor
comprising various elements depending on the configuration (OpenAIGPTConfig
) and inputs.loss (
tf.Tensor
of shape(n,)
, optional, where n is the number of non-masked labels, returned whenlabels
is provided) – Language modeling loss (for next-token prediction).logits (
tf.Tensor
of shape(batch_size, sequence_length, config.vocab_size)
) – Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).hidden_states (
tuple(tf.Tensor)
, optional, returned whenoutput_hidden_states=True
is passed or whenconfig.output_hidden_states=True
) – Tuple oftf.Tensor
(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size)
.Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (
tuple(tf.Tensor)
, optional, returned whenoutput_attentions=True
is passed or whenconfig.output_attentions=True
) – Tuple oftf.Tensor
(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length)
.Attentions weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Return type: TFCausalLMOutput or tuple(tf.Tensor)

Example:

>>> from transformers import OpenAIGPTTokenizer, TFOpenAIGPTLMHeadModel
>>> import tensorflow as tf

>>> tokenizer = OpenAIGPTTokenizer.from_pretrained('openai-gpt')
>>> model = TFOpenAIGPTLMHeadModel.from_pretrained('openai-gpt')

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> outputs = model(inputs)
>>> logits = outputs.logits
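As a small, hedged follow-up (not in the original example), the labels argument documented above can be used to obtain the language modeling loss:

>>> outputs = model(inputs["input_ids"], labels=inputs["input_ids"])
>>> loss = outputs.loss      # per-token losses over the non-masked labels
>>> logits = outputs.logits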
TFOpenAIGPTDoubleHeadsModel
class transformers.TFOpenAIGPTDoubleHeadsModel(*args, **kwargs)

OpenAI GPT Model transformer with a language modeling and a multiple-choice classification head on top, e.g. for RocStories/SWAG tasks. The two heads are two linear layers. The language modeling head has its weights tied to the input embeddings, the classification head takes as input the hidden state at a specified classification token index in the input sequence.

This model inherits from TFPreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.).

This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.

Note: TF 2.0 models accept two formats as inputs:

having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.

This second option is useful when using the tf.keras.Model.fit() method, which currently requires having all the tensors in the first argument of the model call function: model(inputs).

If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument:

a single Tensor with input_ids only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring: model([input_ids, attention_mask]) or model([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring: model({"input_ids": input_ids, "token_type_ids": token_type_ids})

Parameters:
config (OpenAIGPTConfig) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out the from_pretrained() method to load the model weights.
call(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, mc_token_ids=None, output_attentions=None, output_hidden_states=None, return_dict=None, training=False, **kwargs)

The TFOpenAIGPTDoubleHeadsModel forward method, overrides the __call__() special method.

Note: Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Parameters:
input_ids (
Numpy array
ortf.Tensor
of shape(batch_size, sequence_length)
) –Indices of input sequence tokens in the vocabulary.
Indices can be obtained using
OpenAIGPTTokenizer
. Seetransformers.PreTrainedTokenizer.__call__()
andtransformers.PreTrainedTokenizer.encode()
for details.attention_mask (
tf.Tensor
orNumpy array
of shape(batch_size, sequence_length)
, optional) –Mask to avoid performing attention on padding token indices. Mask values selected in
[0, 1]
:1 for tokens that are not masked,
0 for tokens that are masked.
token_type_ids (
tf.Tensor
orNumpy array
of shape(batch_size, sequence_length)
, optional) –Segment token indices to indicate first and second portions of the inputs. Indices are selected in
[0, 1]
:0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
position_ids (
tf.Tensor
orNumpy array
of shape(batch_size, sequence_length)
, optional) –Indices of positions of each input sequence tokens in the position embeddings. Selected in the range
[0, config.max_position_embeddings - 1]
.head_mask (
tf.Tensor
orNumpy array
of shape(num_heads,)
or(num_layers, num_heads)
, optional) –Mask to nullify selected heads of the self-attention modules. Mask values selected in
[0, 1]
:1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (
tf.Tensor
orNumpy array
of shape(batch_size, sequence_length, hidden_size)
, optional) – Optionally, instead of passinginput_ids
you can choose to directly pass an embedded representation. This is useful if you want more control over how to convertinput_ids
indices into associated vectors than the model’s internal embedding lookup matrix.output_attentions (
bool
, optional) – Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.output_hidden_states (
bool
, optional) – Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail. This argument can be used only in eager mode, in graph mode the value in the config will be used instead.return_dict (
bool
, optional) – Whether or not to return aModelOutput
instead of a plain tuple. This argument can be used in eager mode, in graph mode the value will always be set to True.training (
bool
, optional, defaults toFalse
) – Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).mc_token_ids (
tf.Tensor
orNumpy array
of shape(batch_size, num_choices)
, optional, default to index of the last token of the input) – Index of the classification token in each input sequence. Selected in the range[0, input_ids.size(-1) - 1]
.
- Returns
A
TFOpenAIGPTDoubleHeadsModelOutput
(ifreturn_dict=True
is passed or whenconfig.return_dict=True
) or a tuple oftf.Tensor
comprising various elements depending on the configuration (OpenAIGPTConfig
) and inputs.logits (
tf.Tensor
of shape(batch_size, num_choices, sequence_length, config.vocab_size)
) – Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).mc_logits (
tf.Tensor
of shape(batch_size, num_choices)
) – Prediction scores of the multiple choice classification head (scores for each choice before SoftMax).hidden_states (
tuple(tf.Tensor)
, optional, returned whenoutput_hidden_states=True
is passed or whenconfig.output_hidden_states=True
) – Tuple oftf.Tensor
(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size)
.Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (
tuple(tf.Tensor)
, optional, returned whenoutput_attentions=True
is passed or whenconfig.output_attentions=True
) – Tuple oftf.Tensor
(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length)
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
Examples:
>>> import tensorflow as tf
>>> from transformers import OpenAIGPTTokenizer, TFOpenAIGPTDoubleHeadsModel

>>> tokenizer = OpenAIGPTTokenizer.from_pretrained('openai-gpt')
>>> model = TFOpenAIGPTDoubleHeadsModel.from_pretrained('openai-gpt')

>>> # Add a [CLS] to the vocabulary (we should train it also!)
>>> tokenizer.add_special_tokens({'cls_token': '[CLS]'})
>>> model.resize_token_embeddings(len(tokenizer))  # Update the model embeddings with the new vocabulary size
>>> print(tokenizer.cls_token_id, len(tokenizer))  # The newly added token is the last token of the vocabulary

>>> choices = ["Hello, my dog is cute [CLS]", "Hello, my cat is cute [CLS]"]
>>> encoding = tokenizer(choices, return_tensors="tf")
>>> inputs = {k: tf.expand_dims(v, 0) for k, v in encoding.items()}
>>> inputs["mc_token_ids"] = tf.constant([inputs["input_ids"].shape[-1] - 1, inputs["input_ids"].shape[-1] - 1])[None, :]  # Batch size 1
>>> outputs = model(inputs)
>>> lm_prediction_scores, mc_prediction_scores = outputs[:2]
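In the example above both choices have the same length, so the index of the appended [CLS] token is simply input_ids.shape[-1] - 1. When the choices have different lengths and are right-padded, mc_token_ids can instead be derived from the tokenizer's attention mask. The following is a minimal sketch with a hypothetical, hand-written attention_mask (not part of the official example):

>>> import tensorflow as tf
>>> # hypothetical attention_mask for one example with two choices of lengths 5 and 3 (right-padded)
>>> attention_mask = tf.constant([[[1, 1, 1, 1, 1], [1, 1, 1, 0, 0]]])
>>> # mc_token_ids should point at the last real (non-padding) token of each choice, where the appended [CLS] sits
>>> mc_token_ids = tf.reduce_sum(attention_mask, axis=-1) - 1
>>> print(mc_token_ids.numpy())  # [[4 2]]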
- Return type
TFOpenAIGPTDoubleHeadsModelOutput
ortuple(tf.Tensor)
TFOpenAIGPTForSequenceClassification¶
-
class
transformers.
TFOpenAIGPTForSequenceClassification
(*args, **kwargs)[source]¶ The OpenAI GPT Model transformer with a sequence classification head on top (linear layer).
TFOpenAIGPTForSequenceClassification
uses the last token in order to do the classification, as other causal models (e.g. GPT-2) do. Since it does classification on the last token, it needs to know the position of the last token. If a
pad_token_id
is defined in the configuration, it finds the last token that is not a padding token in each row. If nopad_token_id
is defined, it simply takes the last value in each row of the batch. Since it cannot guess the padding tokens wheninputs_embeds
are passed instead ofinput_ids
, it does the same (takes the last value in each row of the batch). This model inherits from
TFPreTrainedModel
. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.). This model is also a tf.keras.Model subclass. Use it as a regular TF 2.0 Keras Model and refer to the TF 2.0 documentation for all matters related to general usage and behavior.
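To make the padding behaviour described above concrete, here is a minimal sketch (an illustration only, not the library's actual implementation) of locating the last non-padding token from input_ids and a pad_token_id:

>>> import tensorflow as tf
>>> pad_token_id = 0  # hypothetical padding id, chosen only for this illustration
>>> input_ids = tf.constant([[5, 6, 7, 0, 0], [5, 6, 7, 8, 9]])
>>> # index of the last token that is not a padding token in each row
>>> last_non_pad = tf.reduce_sum(tf.cast(tf.not_equal(input_ids, pad_token_id), tf.int32), axis=-1) - 1
>>> print(last_non_pad.numpy())  # [2 4]

The classification head is then applied to the hidden state at these positions (or simply at the last position of each row when no pad_token_id is defined).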
Note
TF 2.0 models accept two formats as inputs:
having all inputs as keyword arguments (like PyTorch models), or
having all inputs as a list, tuple or dict in the first positional argument.
This second option is useful when using
tf.keras.Model.fit()
method, which currently requires having all the tensors in the first argument of the model call function: model(inputs)
. If you choose this second option, there are three possibilities you can use to gather all the input Tensors in the first positional argument (a short sketch follows this list):
a single Tensor with
input_ids
only and nothing else: model(input_ids)
a list of varying length with one or several input Tensors IN THE ORDER given in the docstring:
model([input_ids, attention_mask])
ormodel([input_ids, attention_mask, token_type_ids])
a dictionary with one or several input Tensors associated to the input names given in the docstring:
model({"input_ids": input_ids, "token_type_ids": token_type_ids})
- Parameters
config (
OpenAIGPTConfig
) – Model configuration class with all the parameters of the model. Initializing with a config file does not load the weights associated with the model, only the configuration. Check out thefrom_pretrained()
method to load the model weights.
-
call
(input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None, labels=None, training=False, **kwargs)[source]¶ The
TFOpenAIGPTForSequenceClassification
forward method overrides the __call__()
special method.Note
Although the recipe for the forward pass needs to be defined within this function, one should call the
Module
instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.- Parameters
input_ids (
Numpy array
ortf.Tensor
of shape(batch_size, sequence_length)
) –Indices of input sequence tokens in the vocabulary.
Indices can be obtained using
OpenAIGPTTokenizer
. Seetransformers.PreTrainedTokenizer.__call__()
andtransformers.PreTrainedTokenizer.encode()
for details.attention_mask (
tf.Tensor
orNumpy array
of shape(batch_size, sequence_length)
, optional) –Mask to avoid performing attention on padding token indices. Mask values selected in
[0, 1]
:1 for tokens that are not masked,
0 for tokens that are masked.
token_type_ids (
tf.Tensor
orNumpy array
of shape(batch_size, sequence_length)
, optional) –Segment token indices to indicate first and second portions of the inputs. Indices are selected in
[0, 1]
:0 corresponds to a sentence A token,
1 corresponds to a sentence B token.
position_ids (
tf.Tensor
orNumpy array
of shape(batch_size, sequence_length)
, optional) –Indices of positions of each input sequence token in the position embeddings. Selected in the range
[0, config.max_position_embeddings - 1]
.head_mask (
tf.Tensor
orNumpy array
of shape(num_heads,)
or(num_layers, num_heads)
, optional) –Mask to nullify selected heads of the self-attention modules. Mask values selected in
[0, 1]
:1 indicates the head is not masked,
0 indicates the head is masked.
inputs_embeds (
tf.Tensor
orNumpy array
of shape(batch_size, sequence_length, hidden_size)
, optional) – Optionally, instead of passinginput_ids
you can choose to directly pass an embedded representation. This is useful if you want more control over how to convertinput_ids
indices into associated vectors than the model’s internal embedding lookup matrix.output_attentions (
bool
, optional) – Whether or not to return the attentions tensors of all attention layers. Seeattentions
under returned tensors for more detail. This argument can only be used in eager mode; in graph mode the value in the config will be used instead.output_hidden_states (
bool
, optional) – Whether or not to return the hidden states of all layers. Seehidden_states
under returned tensors for more detail. This argument can only be used in eager mode; in graph mode the value in the config will be used instead.return_dict (
bool
, optional) – Whether or not to return aModelOutput
instead of a plain tuple. This argument can be used in eager mode; in graph mode the value will always be set to True.training (
bool
, optional, defaults toFalse
) – Whether or not to use the model in training mode (some modules like dropout modules have different behaviors between training and evaluation).labels (
tf.Tensor
of shape(batch_size, sequence_length)
, optional) – Labels for computing the sequence classification (or regression if config.num_labels==1) loss. Indices should be in [0, ..., config.num_labels - 1]
.
- Returns
A
TFSequenceClassifierOutput
(ifreturn_dict=True
is passed or whenconfig.return_dict=True
) or a tuple oftf.Tensor
comprising various elements depending on the configuration (OpenAIGPTConfig
) and inputs.loss (
tf.Tensor
of shape(batch_size, )
, optional, returned whenlabels
is provided) – Classification (or regression if config.num_labels==1) loss.logits (
tf.Tensor
of shape(batch_size, config.num_labels)
) – Classification (or regression if config.num_labels==1) scores (before SoftMax).hidden_states (
tuple(tf.Tensor)
, optional, returned whenoutput_hidden_states=True
is passed or whenconfig.output_hidden_states=True
) – Tuple oftf.Tensor
(one for the output of the embeddings + one for the output of each layer) of shape(batch_size, sequence_length, hidden_size)
.Hidden-states of the model at the output of each layer plus the initial embedding outputs.
attentions (
tuple(tf.Tensor)
, optional, returned whenoutput_attentions=True
is passed or whenconfig.output_attentions=True
) – Tuple oftf.Tensor
(one for each layer) of shape(batch_size, num_heads, sequence_length, sequence_length)
Attention weights after the attention softmax, used to compute the weighted average in the self-attention heads.
- Return type
TFSequenceClassifierOutput
ortuple(tf.Tensor)
Example:
>>> from transformers import OpenAIGPTTokenizer, TFOpenAIGPTForSequenceClassification
>>> import tensorflow as tf

>>> tokenizer = OpenAIGPTTokenizer.from_pretrained('openai-gpt')
>>> model = TFOpenAIGPTForSequenceClassification.from_pretrained('openai-gpt')

>>> inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
>>> inputs["labels"] = tf.reshape(tf.constant(1), (-1, 1))  # Batch size 1

>>> outputs = model(inputs)
>>> loss = outputs.loss
>>> logits = outputs.logits
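The pretrained openai-gpt tokenizer does not define a padding token, so batched classification requires picking one. Continuing from the example above, a hedged sketch follows; reusing the unknown token as padding is our own illustrative choice, not an official recipe:

>>> # reuse the tokenizer's unknown token as padding so the last non-padding token can be located
>>> tokenizer.pad_token = tokenizer.unk_token
>>> model.config.pad_token_id = tokenizer.unk_token_id

>>> batch = tokenizer(["Hello, my dog is cute", "Hi"], padding=True, return_tensors="tf")
>>> logits = model(batch).logits  # shape (batch_size, config.num_labels)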