Custom Layers and Utilities
This page lists all the custom layers used by the library, as well as the utility functions it provides for modeling.
Most of those are only useful if you are studying the code of the models in the library.
PyTorch custom modules
class transformers.Conv1D
( nf, nx )
1D-convolutional layer as defined by Radford et al. for OpenAI GPT (and also used in GPT-2).
Basically works like a linear layer but the weights are transposed.
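A minimal usage sketch with arbitrary example sizes (GPT-2's fused query/key/value projection uses nf = 3 * hidden_size):

```python
import torch
from transformers import Conv1D

# Conv1D(nf, nx) maps the last dimension from nx to nf features, like nn.Linear(nx, nf),
# but stores its weight transposed, with shape (nx, nf).
layer = Conv1D(nf=2304, nx=768)        # e.g. a fused query/key/value projection (3 * 768)
hidden = torch.randn(2, 10, 768)       # (batch_size, seq_len, hidden_size)
out = layer(hidden)                    # (batch_size, seq_len, nf)

print(out.shape)                       # torch.Size([2, 10, 2304])
print(layer.weight.shape)              # torch.Size([768, 2304]) -- transposed relative to nn.Linear
```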
class transformers.modeling_utils.PoolerStartLogits
( config: PretrainedConfig )
Parameters
- config (PretrainedConfig) — The config used by the model; it will be used to grab the hidden_size of the model.
Compute SQuAD start logits from sequence hidden states.
forward
( hidden_states: FloatTensor, p_mask: typing.Optional[torch.FloatTensor] = None ) → torch.FloatTensor
Parameters
- hidden_states (
torch.FloatTensor
of shape(batch_size, seq_len, hidden_size)
) — The final hidden states of the model. -
p_mask (
torch.FloatTensor
of shape(batch_size, seq_len)
, optional) — Mask for tokens at invalid position, such as query and special symbols (PAD, SEP, CLS). 1.0 means token should be masked.
Returns
torch.FloatTensor — The start logits for SQuAD.
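A minimal sketch of how this head is used, assuming a toy config with an arbitrary hidden_size (the only field the layer reads):

```python
import torch
from transformers import PretrainedConfig
from transformers.modeling_utils import PoolerStartLogits

config = PretrainedConfig(hidden_size=32)      # toy config; only hidden_size is used here
start_head = PoolerStartLogits(config)

hidden_states = torch.randn(2, 7, 32)          # (batch_size, seq_len, hidden_size)
start_logits = start_head(hidden_states)       # (batch_size, seq_len)
```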
class transformers.modeling_utils.PoolerEndLogits
( config: PretrainedConfig )
Parameters
- config (PretrainedConfig) — The config used by the model; it will be used to grab the hidden_size of the model and the layer_norm_eps to use.
Compute SQuAD end logits from sequence hidden states.
forward
( hidden_states: FloatTensor, start_states: typing.Optional[torch.FloatTensor] = None, start_positions: typing.Optional[torch.LongTensor] = None, p_mask: typing.Optional[torch.FloatTensor] = None ) → torch.FloatTensor
Parameters
- hidden_states (torch.FloatTensor of shape (batch_size, seq_len, hidden_size)) — The final hidden states of the model.
- start_states (torch.FloatTensor of shape (batch_size, seq_len, hidden_size), optional) — The hidden states of the first tokens for the labeled span.
- start_positions (torch.LongTensor of shape (batch_size,), optional) — The position of the first token for the labeled span.
- p_mask (torch.FloatTensor of shape (batch_size, seq_len), optional) — Mask for tokens at invalid positions, such as query and special symbols (PAD, SEP, CLS). 1.0 means the token should be masked.
Returns
torch.FloatTensor — The end logits for SQuAD.
One of start_states or start_positions should not be None. If both are set, start_positions overrides start_states.
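A minimal sketch with toy sizes, passing gold start_positions as is done during training (the layer reads hidden_size and layer_norm_eps from the config):

```python
import torch
from transformers import PretrainedConfig
from transformers.modeling_utils import PoolerEndLogits

config = PretrainedConfig(hidden_size=32, layer_norm_eps=1e-12)   # toy config
end_head = PoolerEndLogits(config)

hidden_states = torch.randn(2, 7, 32)             # (batch_size, seq_len, hidden_size)
start_positions = torch.tensor([1, 3])            # one gold start position per example
end_logits = end_head(hidden_states, start_positions=start_positions)   # (batch_size, seq_len)
```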
class transformers.modeling_utils.PoolerAnswerClass
( config )
Parameters
- config (PretrainedConfig) — The config used by the model; it will be used to grab the hidden_size of the model.
Compute SQuAD 2.0 answer class from classification and start tokens hidden states.
forward
( hidden_states: FloatTensor, start_states: typing.Optional[torch.FloatTensor] = None, start_positions: typing.Optional[torch.LongTensor] = None, cls_index: typing.Optional[torch.LongTensor] = None ) → torch.FloatTensor
Parameters
- hidden_states (torch.FloatTensor of shape (batch_size, seq_len, hidden_size)) — The final hidden states of the model.
- start_states (torch.FloatTensor of shape (batch_size, seq_len, hidden_size), optional) — The hidden states of the first tokens for the labeled span.
- start_positions (torch.LongTensor of shape (batch_size,), optional) — The position of the first token for the labeled span.
- cls_index (torch.LongTensor of shape (batch_size,), optional) — Position of the CLS token for each sentence in the batch. If None, takes the last token.
Returns
torch.FloatTensor — The SQuAD 2.0 answer class.
One of start_states or start_positions should not be None. If both are set, start_positions overrides start_states.
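A minimal sketch with toy sizes; without cls_index, the last token of each sequence is used as the classification token:

```python
import torch
from transformers import PretrainedConfig
from transformers.modeling_utils import PoolerAnswerClass

config = PretrainedConfig(hidden_size=32)         # toy config; only hidden_size is used here
answer_head = PoolerAnswerClass(config)

hidden_states = torch.randn(2, 7, 32)             # (batch_size, seq_len, hidden_size)
start_positions = torch.tensor([1, 3])
cls_logits = answer_head(hidden_states, start_positions=start_positions)   # (batch_size,)
```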
class transformers.modeling_utils.SquadHeadOutput
( loss: typing.Optional[torch.FloatTensor] = None, start_top_log_probs: typing.Optional[torch.FloatTensor] = None, start_top_index: typing.Optional[torch.LongTensor] = None, end_top_log_probs: typing.Optional[torch.FloatTensor] = None, end_top_index: typing.Optional[torch.LongTensor] = None, cls_logits: typing.Optional[torch.FloatTensor] = None )
Parameters
- loss (torch.FloatTensor of shape (1,), optional, returned if both start_positions and end_positions are provided) — Classification loss as the sum of start token, end token (and is_impossible, if provided) classification losses.
- start_top_log_probs (torch.FloatTensor of shape (batch_size, config.start_n_top), optional, returned if start_positions or end_positions is not provided) — Log probabilities for the top config.start_n_top start token possibilities (beam-search).
- start_top_index (torch.LongTensor of shape (batch_size, config.start_n_top), optional, returned if start_positions or end_positions is not provided) — Indices for the top config.start_n_top start token possibilities (beam-search).
- end_top_log_probs (torch.FloatTensor of shape (batch_size, config.start_n_top * config.end_n_top), optional, returned if start_positions or end_positions is not provided) — Log probabilities for the top config.start_n_top * config.end_n_top end token possibilities (beam-search).
- end_top_index (torch.LongTensor of shape (batch_size, config.start_n_top * config.end_n_top), optional, returned if start_positions or end_positions is not provided) — Indices for the top config.start_n_top * config.end_n_top end token possibilities (beam-search).
- cls_logits (torch.FloatTensor of shape (batch_size,), optional, returned if start_positions or end_positions is not provided) — Log probabilities for the is_impossible label of the answers.
Base class for outputs of question answering models using a SQuADHead.
class transformers.modeling_utils.SQuADHead
( config )
Parameters
- config (PretrainedConfig) — The config used by the model; it will be used to grab the hidden_size of the model and the layer_norm_eps to use.
A SQuAD head inspired by XLNet.
forward
( hidden_states: FloatTensor, start_positions: typing.Optional[torch.LongTensor] = None, end_positions: typing.Optional[torch.LongTensor] = None, cls_index: typing.Optional[torch.LongTensor] = None, is_impossible: typing.Optional[torch.LongTensor] = None, p_mask: typing.Optional[torch.FloatTensor] = None, return_dict: bool = False ) → transformers.modeling_utils.SquadHeadOutput or tuple(torch.FloatTensor)
Parameters
- hidden_states (torch.FloatTensor of shape (batch_size, seq_len, hidden_size)) — Final hidden states of the model on the sequence tokens.
- start_positions (torch.LongTensor of shape (batch_size,), optional) — Positions of the first token for the labeled span.
- end_positions (torch.LongTensor of shape (batch_size,), optional) — Positions of the last token for the labeled span.
- cls_index (torch.LongTensor of shape (batch_size,), optional) — Position of the CLS token for each sentence in the batch. If None, takes the last token.
- is_impossible (torch.LongTensor of shape (batch_size,), optional) — Whether the question has a possible answer in the paragraph or not.
- p_mask (torch.FloatTensor of shape (batch_size, seq_len), optional) — Mask for tokens at invalid positions, such as query and special symbols (PAD, SEP, CLS). 1.0 means the token should be masked.
- return_dict (bool, optional, defaults to False) — Whether or not to return a ModelOutput instead of a plain tuple.
Returns
transformers.modeling_utils.SquadHeadOutput or tuple(torch.FloatTensor) — A transformers.modeling_utils.SquadHeadOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (PretrainedConfig) and inputs.
- loss (torch.FloatTensor of shape (1,), optional, returned if both start_positions and end_positions are provided) — Classification loss as the sum of start token, end token (and is_impossible, if provided) classification losses.
- start_top_log_probs (torch.FloatTensor of shape (batch_size, config.start_n_top), optional, returned if start_positions or end_positions is not provided) — Log probabilities for the top config.start_n_top start token possibilities (beam-search).
- start_top_index (torch.LongTensor of shape (batch_size, config.start_n_top), optional, returned if start_positions or end_positions is not provided) — Indices for the top config.start_n_top start token possibilities (beam-search).
- end_top_log_probs (torch.FloatTensor of shape (batch_size, config.start_n_top * config.end_n_top), optional, returned if start_positions or end_positions is not provided) — Log probabilities for the top config.start_n_top * config.end_n_top end token possibilities (beam-search).
- end_top_index (torch.LongTensor of shape (batch_size, config.start_n_top * config.end_n_top), optional, returned if start_positions or end_positions is not provided) — Indices for the top config.start_n_top * config.end_n_top end token possibilities (beam-search).
- cls_logits (torch.FloatTensor of shape (batch_size,), optional, returned if start_positions or end_positions is not provided) — Log probabilities for the is_impossible label of the answers.
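A minimal sketch of both modes with a toy config (the head reads hidden_size, layer_norm_eps, start_n_top and end_n_top): with gold positions it returns the loss, without them it returns the beam-search style top-k outputs described above:

```python
import torch
from transformers import PretrainedConfig
from transformers.modeling_utils import SQuADHead

config = PretrainedConfig(hidden_size=32, layer_norm_eps=1e-12, start_n_top=5, end_n_top=5)
head = SQuADHead(config)

hidden_states = torch.randn(2, 7, 32)            # (batch_size, seq_len, hidden_size)

# Training mode: providing the gold spans returns the summed classification loss.
train_out = head(
    hidden_states,
    start_positions=torch.tensor([1, 2]),
    end_positions=torch.tensor([3, 4]),
    return_dict=True,
)
print(train_out.loss)

# Inference mode: without positions, top-k start/end candidates are returned.
eval_out = head(hidden_states, return_dict=True)
print(eval_out.start_top_index.shape)            # (batch_size, config.start_n_top)
```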
class transformers.modeling_utils.SequenceSummary
( config: PretrainedConfig )
Parameters
- config (PretrainedConfig) — The config used by the model. Relevant arguments in the config class of the model are (refer to the actual config class of your model for the default values it uses):
  - summary_type (str) — The method to use to make this summary. Accepted values are:
    - "last" — Take the last token hidden state (like XLNet)
    - "first" — Take the first token hidden state (like BERT)
    - "mean" — Take the mean of all tokens' hidden states
    - "cls_index" — Supply a Tensor of classification token position (GPT/GPT-2)
    - "attn" — Not implemented for now; use multi-head attention
  - summary_use_proj (bool) — Add a projection after the vector extraction.
  - summary_proj_to_labels (bool) — If True, the projection outputs to config.num_labels classes (otherwise to config.hidden_size).
  - summary_activation (Optional[str]) — Set to "tanh" to add a tanh activation to the output; any other string or None will add no activation.
  - summary_first_dropout (float) — Optional dropout probability before the projection and activation.
  - summary_last_dropout (float) — Optional dropout probability after the projection and activation.

Compute a single vector summary of a sequence's hidden states.
forward
( hidden_states: FloatTensor, cls_index: typing.Optional[torch.LongTensor] = None ) → torch.FloatTensor
Parameters
- hidden_states (
torch.FloatTensor
of shape[batch_size, seq_len, hidden_size]
) — The hidden states of the last layer. -
cls_index (
torch.LongTensor
of shape[batch_size]
or[batch_size, ...]
where … are optional leading dimensions ofhidden_states
, optional) — Used ifsummary_type == "cls_index"
and takes the last token of the sequence as classification token.
Returns
torch.FloatTensor — The summary of the sequence hidden states.

Compute a single vector summary of a sequence's hidden states.
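A minimal sketch with a toy config: take the last token's hidden state and project it to config.num_labels classes, roughly what a sequence classification head does:

```python
import torch
from transformers import PretrainedConfig
from transformers.modeling_utils import SequenceSummary

config = PretrainedConfig(
    hidden_size=32,
    summary_type="last",
    summary_use_proj=True,
    summary_proj_to_labels=True,
    num_labels=2,
    summary_activation="tanh",
    summary_first_dropout=0.1,
    summary_last_dropout=0.1,
)
summary = SequenceSummary(config)

hidden_states = torch.randn(2, 7, 32)    # (batch_size, seq_len, hidden_size)
logits = summary(hidden_states)          # (batch_size, num_labels)
```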
PyTorch Helper Functions
transformers.apply_chunking_to_forward
( forward_fn: typing.Callable[..., torch.Tensor], chunk_size: int, chunk_dim: int, *input_tensors ) → torch.Tensor
Parameters
- forward_fn (Callable[..., torch.Tensor]) — The forward function of the model.
- chunk_size (int) — The chunk size of a chunked tensor: num_chunks = len(input_tensors[0]) / chunk_size.
- chunk_dim (int) — The dimension over which the input_tensors should be chunked.
- input_tensors (Tuple[torch.Tensor]) — The input tensors of forward_fn which will be chunked.
Returns
torch.Tensor — A tensor with the same shape as forward_fn would have given if applied directly.
This function chunks the input_tensors into smaller input tensor parts of size chunk_size over the dimension chunk_dim. It then applies a layer forward_fn to each chunk independently to save memory.

If forward_fn is independent across chunk_dim, this function will yield the same result as directly applying forward_fn to input_tensors.
Examples:
```python
# rename the usual forward() fn to forward_chunk()
def forward_chunk(self, hidden_states):
    hidden_states = self.decoder(hidden_states)
    return hidden_states


# implement a chunked forward function
def forward(self, hidden_states):
    return apply_chunking_to_forward(self.forward_chunk, self.chunk_size_lm_head, self.seq_len_dim, hidden_states)
```
transformers.pytorch_utils.find_pruneable_heads_and_indices
( heads: typing.List[int], n_heads: int, head_size: int, already_pruned_heads: typing.Set[int] ) → Tuple[Set[int], torch.LongTensor]
Parameters
- heads (List[int]) — List of the indices of heads to prune.
- n_heads (int) — The number of heads in the model.
- head_size (int) — The size of each head.
- already_pruned_heads (Set[int]) — A set of already pruned heads.
Returns
Tuple[Set[int], torch.LongTensor] — A tuple with the heads to prune (taking already_pruned_heads into account) and the indices of the entries to keep in the layer weights.
Finds the heads and their indices taking already_pruned_heads into account.
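A small sketch with hypothetical head counts: the returned set contains the heads that still need pruning, and the index tensor lists the flat positions to keep in the attention projections:

```python
from transformers.pytorch_utils import find_pruneable_heads_and_indices

# A layer that originally had 12 heads of size 64; head 0 was already pruned,
# so 11 heads are currently present.
heads, index = find_pruneable_heads_and_indices(
    heads=[0, 2, 5],               # heads requested for pruning (0 is already gone)
    n_heads=11,                    # heads currently present in the layer
    head_size=64,
    already_pruned_heads={0},
)
print(heads)          # {2, 5} -- only the heads that still need pruning
print(index.shape)    # torch.Size([576]) -- (11 - 2) * 64 positions to keep
```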
transformers.prune_layer
( layer: typing.Union[torch.nn.modules.linear.Linear, transformers.pytorch_utils.Conv1D], index: LongTensor, dim: typing.Optional[int] = None ) → torch.nn.Linear or Conv1D
Parameters
- layer (Union[torch.nn.Linear, Conv1D]) — The layer to prune.
- index (torch.LongTensor) — The indices to keep in the layer.
- dim (int, optional) — The dimension on which to keep the indices.
Returns
torch.nn.Linear or Conv1D — The pruned layer as a new layer with requires_grad=True.
Prune a Conv1D or linear layer to keep only entries in index.
Used to remove heads.
transformers.pytorch_utils.prune_conv1d_layer
( layer: Conv1D, index: LongTensor, dim: int = 1 ) → Conv1D
Prune a Conv1D layer to keep only entries in index. A Conv1D works like a linear layer (see e.g. BERT) but the weights are transposed.
Used to remove heads.
transformers.pytorch_utils.prune_linear_layer
( layer: Linear, index: LongTensor, dim: int = 0 ) → torch.nn.Linear
Prune a linear layer to keep only entries in index.
Used to remove heads.
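A small sketch combining the two helpers above on a hypothetical query projection (12 heads of size 64):

```python
import torch
from torch import nn
from transformers.pytorch_utils import find_pruneable_heads_and_indices, prune_linear_layer

query = nn.Linear(768, 768)                     # 12 heads * 64 dims = 768 output features
heads, index = find_pruneable_heads_and_indices(
    heads=[2, 5], n_heads=12, head_size=64, already_pruned_heads=set()
)
pruned_query = prune_linear_layer(query, index, dim=0)   # prune along the output dimension
print(pruned_query.weight.shape)                # torch.Size([640, 768]) -- 10 heads left
```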
TensorFlow custom layers
class transformers.modeling_tf_utils.TFConv1D
( *args, **kwargs )
1D-convolutional layer as defined by Radford et al. for OpenAI GPT (and also used in GPT-2).
Basically works like a linear layer but the weights are transposed.
class transformers.TFSharedEmbeddings
( *args, **kwargs )
Parameters
- vocab_size (int) — The size of the vocabulary, e.g., the number of unique tokens.
- hidden_size (int) — The size of the embedding vectors.
- initializer_range (float, optional) — The standard deviation to use when initializing the weights. If no value is provided, it will default to 1/sqrt(hidden_size).
- kwargs — Additional keyword arguments passed along to the __init__ of tf.keras.layers.Layer.
Construct shared token embeddings.
The weights of the embedding layer are usually shared with the weights of the linear decoder when doing language modeling.
call
( inputs: Tensor, mode: str = 'embedding' ) → tf.Tensor
Parameters
- inputs (tf.Tensor) — In embedding mode, should be an int64 tensor with shape [batch_size, length]. In linear mode, should be a float tensor with shape [batch_size, length, hidden_size].
- mode (str, defaults to "embedding") — A valid value is either "embedding" or "linear"; the first indicates that the layer should be used as an embedding layer, the second that the layer should be used as a linear decoder.
Returns
tf.Tensor — In embedding mode, the output is a float32 embedding tensor with shape [batch_size, length, embedding_size]. In linear mode, the output is a float32 tensor with shape [batch_size, length, vocab_size].

Raises
- ValueError — if mode is not valid.
Get token embeddings of inputs or decode final hidden state.
Shared weights logic is adapted from here.
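A minimal sketch with toy sizes, assuming a version of the library that still ships TFSharedEmbeddings: the same weight matrix embeds token ids and, in linear mode, decodes hidden states back into vocabulary logits:

```python
import tensorflow as tf
from transformers import TFSharedEmbeddings

embeddings = TFSharedEmbeddings(vocab_size=100, hidden_size=32)    # toy sizes

input_ids = tf.constant([[1, 2, 3, 4]], dtype=tf.int64)            # (batch_size, length)
hidden = embeddings(input_ids, mode="embedding")                   # (batch_size, length, hidden_size)
logits = embeddings(hidden, mode="linear")                         # (batch_size, length, vocab_size)
```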
class transformers.TFSequenceSummary
( *args, **kwargs )
Parameters
- config (PretrainedConfig) — The config used by the model. Relevant arguments in the config class of the model are (refer to the actual config class of your model for the default values it uses):
  - summary_type (str) — The method to use to make this summary. Accepted values are:
    - "last" — Take the last token hidden state (like XLNet)
    - "first" — Take the first token hidden state (like BERT)
    - "mean" — Take the mean of all tokens' hidden states
    - "cls_index" — Supply a Tensor of classification token position (GPT/GPT-2)
    - "attn" — Not implemented for now; use multi-head attention
  - summary_use_proj (bool) — Add a projection after the vector extraction.
  - summary_proj_to_labels (bool) — If True, the projection outputs to config.num_labels classes (otherwise to config.hidden_size).
  - summary_activation (Optional[str]) — Set to "tanh" to add a tanh activation to the output; any other string or None will add no activation.
  - summary_first_dropout (float) — Optional dropout probability before the projection and activation.
  - summary_last_dropout (float) — Optional dropout probability after the projection and activation.
- initializer_range (float, defaults to 0.02) — The standard deviation to use to initialize the weights.
- kwargs — Additional keyword arguments passed along to the __init__ of tf.keras.layers.Layer.
Compute a single vector summary of a sequence's hidden states.
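A minimal sketch mirroring the PyTorch example above, with a toy config:

```python
import tensorflow as tf
from transformers import PretrainedConfig
from transformers.modeling_tf_utils import TFSequenceSummary

config = PretrainedConfig(
    hidden_size=32,
    summary_type="last",
    summary_use_proj=True,
    summary_proj_to_labels=True,
    num_labels=2,
    summary_activation="tanh",
    summary_first_dropout=0.0,
    summary_last_dropout=0.0,
)
summary = TFSequenceSummary(config, initializer_range=0.02)

hidden_states = tf.random.normal((2, 7, 32))    # (batch_size, seq_len, hidden_size)
logits = summary(hidden_states)                 # (batch_size, num_labels)
```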
TensorFlow loss functions
class transformers.modeling_tf_utils.TFCausalLanguageModelingLoss
Loss function suitable for causal language modeling (CLM), that is, the task of guessing the next token.
Any label of -100 will be ignored (along with the corresponding logits) in the loss computation.

class transformers.modeling_tf_utils.TFMaskedLanguageModelingLoss
Loss function suitable for masked language modeling (MLM), that is, the task of guessing the masked tokens.
Any label of -100 will be ignored (along with the corresponding logits) in the loss computation.

class transformers.modeling_tf_utils.TFMultipleChoiceLoss
Loss function suitable for multiple choice tasks.

class transformers.modeling_tf_utils.TFQuestionAnsweringLoss
Loss function suitable for question answering.

class transformers.modeling_tf_utils.TFSequenceClassificationLoss
Loss function suitable for sequence classification.

class transformers.modeling_tf_utils.TFTokenClassificationLoss
Loss function suitable for token classification.
Any label of -100 will be ignored (along with the corresponding logits) in the loss computation.
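A hedged sketch of how these mixins are typically used, assuming a recent version of the library where the method is named hf_compute_loss (older releases called it compute_loss). ToyTagger and its sizes are made up for illustration:

```python
import tensorflow as tf
from transformers.modeling_tf_utils import TFTokenClassificationLoss

class ToyTagger(tf.keras.Model, TFTokenClassificationLoss):
    """Tiny token classifier mixing in the library's token-classification loss."""

    def __init__(self, num_labels=3):
        super().__init__()
        self.classifier = tf.keras.layers.Dense(num_labels)

    def call(self, hidden_states):
        return self.classifier(hidden_states)

model = ToyTagger()
hidden_states = tf.random.normal((2, 5, 8))          # (batch_size, seq_len, hidden_size)
logits = model(hidden_states)                        # (batch_size, seq_len, num_labels)
labels = tf.constant([[0, 1, 2, -100, -100],
                      [2, 0, -100, -100, -100]])     # -100 positions are ignored
loss = model.hf_compute_loss(labels, logits)
```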
TensorFlow Helper Functions
transformers.modeling_tf_utils.get_initializer
( initializer_range: float = 0.02 ) → tf.initializers.TruncatedNormal

Creates a tf.initializers.TruncatedNormal with the given range.
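For example, to initialize a custom projection the same way the library's TF models do (the extra Dense head here is a made-up example):

```python
import tensorflow as tf
from transformers.modeling_tf_utils import get_initializer

initializer = get_initializer(initializer_range=0.02)
dense = tf.keras.layers.Dense(64, kernel_initializer=initializer)   # hypothetical extra head
```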
transformers.modeling_tf_utils.keras_serializable
( )
Decorate a Keras Layer class to support Keras serialization.

This is done by:
- Adding a transformers_config dict to the Keras config dictionary in get_config (called by Keras at serialization time).
- Wrapping __init__ to accept that transformers_config dict (passed by Keras at deserialization time) and convert it to a config object for the actual layer initializer.
- Registering the class as a custom object in Keras (if the TensorFlow version supports this), so that it does not need to be supplied in custom_objects in the call to tf.keras.models.load_model.
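A minimal sketch of a custom layer using the decorator; MyBertBlock is a made-up example, and the decorated class must define a config_class attribute and take the config as the first argument of __init__:

```python
import tensorflow as tf
from transformers import BertConfig
from transformers.modeling_tf_utils import keras_serializable

@keras_serializable
class MyBertBlock(tf.keras.layers.Layer):
    config_class = BertConfig          # required by the decorator

    def __init__(self, config, **kwargs):
        super().__init__(**kwargs)
        self.dense = tf.keras.layers.Dense(config.hidden_size)

    def call(self, hidden_states):
        return self.dense(hidden_states)

layer = MyBertBlock(BertConfig(hidden_size=32))
# The layer's config can now round-trip through Keras serialization without
# passing custom_objects to tf.keras.models.load_model.
```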
transformers.shape_list
( tensor: typing.Union[tensorflow.python.framework.ops.Tensor, numpy.ndarray] ) → List[int]
Deal with dynamic shapes in TensorFlow cleanly.
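A short sketch; inside a tf.function, dimensions that are not statically known come back as dynamic scalar tensors instead of Python ints:

```python
import tensorflow as tf
from transformers.modeling_tf_utils import shape_list

x = tf.random.normal((2, 7, 32))
print(shape_list(x))    # [2, 7, 32] -- static dims as ints, dynamic dims as scalar tensors
```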