Utilities for Generation
This page lists all the utility functions used by generate(), greedy_search(), sample(), beam_search(), beam_sample(), and group_beam_search().
Most of those are only useful if you are studying the code of the generate methods in the library.
Generate Outputs
The output of generate() is an instance of a subclass of ModelOutput. This output is a data structure containing all the information returned by generate(), but one that can also be used as a tuple or dictionary.
Here's an example:
from transformers import GPT2Tokenizer, GPT2LMHeadModel
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
inputs = tokenizer("Hello, my dog is cute and ", return_tensors="pt")
generation_output = model.generate(**inputs, return_dict_in_generate=True, output_scores=True)
The generation_output object is a GreedySearchDecoderOnlyOutput. As we can see in the documentation of that class below, it has the following attributes:
- sequences: the generated sequences of tokens
- scores (optional): the prediction scores of the language modeling head, for each generation step
- hidden_states (optional): the hidden states of the model, for each generation step
- attentions (optional): the attention weights of the model, for each generation step
Here we have the scores since we passed along output_scores=True, but we don't have hidden_states and attentions because we didn't pass output_hidden_states=True or output_attentions=True.
You can access each attribute as you would usually do, and if that attribute has not been returned by the model, you will get None. Here, for instance, generation_output.scores are all the generated prediction scores of the language modeling head, and generation_output.attentions is None.
When using our generation_output object as a tuple, it only keeps the attributes that don't have None values. Here, for instance, it has two elements, sequences then scores, so generation_output[:2] will return the tuple (generation_output.sequences, generation_output.scores).
When using our generation_output object as a dictionary, it only keeps the attributes that don't have None values. Here, for instance, it has two keys that are sequences and scores.
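Continuing the example above, here is a minimal sketch of how those attributes can be inspected (exact values depend on the model and prompt):

# `sequences` holds the generated token ids; decode them back to text.
print(tokenizer.decode(generation_output.sequences[0]))

# `scores` is a tuple with one (batch_size, vocab_size) tensor per generated token.
print(len(generation_output.scores), generation_output.scores[0].shape)

# Attributes that were not requested come back as None ...
print(generation_output.attentions)  # None

# ... and None attributes are skipped in the tuple and dictionary views.
sequences, scores = generation_output[:2]
print(list(generation_output.keys()))  # ['sequences', 'scores']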
We document here all output types.
GreedySearchOutput
- class transformers.generation_utils.GreedySearchDecoderOnlyOutput(sequences: torch.LongTensor = None, scores: Optional[Tuple[torch.FloatTensor]] = None, attentions: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, hidden_states: Optional[Tuple[Tuple[torch.FloatTensor]]] = None)[source]
  Base class for outputs of decoder-only generation models using greedy search.
  Parameters
  - sequences (torch.LongTensor of shape (batch_size, sequence_length)) – The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
  - scores (tuple(torch.FloatTensor), optional, returned when output_scores=True is passed or when config.output_scores=True) – Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. (max_length,)-shaped tuple of torch.FloatTensor with each tensor of shape (batch_size, config.vocab_size).
  - attentions (tuple(tuple(torch.FloatTensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) – Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size, num_heads, generated_length, sequence_length).
  - hidden_states (tuple(tuple(torch.FloatTensor)), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size, generated_length, hidden_size).
- class transformers.generation_utils.GreedySearchEncoderDecoderOutput(sequences: torch.LongTensor = None, scores: Optional[Tuple[torch.FloatTensor]] = None, encoder_attentions: Optional[Tuple[torch.FloatTensor]] = None, encoder_hidden_states: Optional[Tuple[torch.FloatTensor]] = None, decoder_attentions: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, cross_attentions: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, decoder_hidden_states: Optional[Tuple[Tuple[torch.FloatTensor]]] = None)[source]
  Base class for outputs of encoder-decoder generation models using greedy search. Hidden states and attention weights of the encoder (respectively the decoder) can be accessed via the encoder_attentions and encoder_hidden_states attributes (respectively the decoder_attentions and decoder_hidden_states attributes).
  Parameters
  - sequences (torch.LongTensor of shape (batch_size, sequence_length)) – The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
  - scores (tuple(torch.FloatTensor), optional, returned when output_scores=True is passed or when config.output_scores=True) – Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. (max_length,)-shaped tuple of torch.FloatTensor with each tensor of shape (batch_size, config.vocab_size).
  - encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer of the encoder) of shape (batch_size, num_heads, sequence_length, sequence_length).
  - encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size, sequence_length, hidden_size).
  - decoder_attentions (tuple(tuple(torch.FloatTensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) – Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size, num_heads, generated_length, sequence_length).
  - cross_attentions (tuple(tuple(torch.FloatTensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) – Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size, num_heads, generated_length, sequence_length).
  - decoder_hidden_states (tuple(tuple(torch.FloatTensor)), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size, generated_length, hidden_size).
SampleOutput
- class transformers.generation_utils.SampleDecoderOnlyOutput(sequences: torch.LongTensor = None, scores: Optional[Tuple[torch.FloatTensor]] = None, attentions: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, hidden_states: Optional[Tuple[Tuple[torch.FloatTensor]]] = None)[source]
  Base class for outputs of decoder-only generation models using sampling.
  Parameters
  - sequences (torch.LongTensor of shape (batch_size*num_return_sequences, sequence_length)) – The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
  - scores (tuple(torch.FloatTensor), optional, returned when output_scores=True is passed or when config.output_scores=True) – Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. (max_length,)-shaped tuple of torch.FloatTensor with each tensor of shape (batch_size*num_return_sequences, config.vocab_size).
  - attentions (tuple(tuple(torch.FloatTensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) – Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (num_return_sequences*batch_size, num_heads, generated_length, sequence_length).
  - hidden_states (tuple(tuple(torch.FloatTensor)), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (num_return_sequences*batch_size, generated_length, hidden_size).
- class transformers.generation_utils.SampleEncoderDecoderOutput(sequences: torch.LongTensor = None, scores: Optional[Tuple[torch.FloatTensor]] = None, encoder_attentions: Optional[Tuple[torch.FloatTensor]] = None, encoder_hidden_states: Optional[Tuple[torch.FloatTensor]] = None, decoder_attentions: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, cross_attentions: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, decoder_hidden_states: Optional[Tuple[Tuple[torch.FloatTensor]]] = None)[source]
  Base class for outputs of encoder-decoder generation models using sampling. Hidden states and attention weights of the encoder (respectively the decoder) can be accessed via the encoder_attentions and encoder_hidden_states attributes (respectively the decoder_attentions and decoder_hidden_states attributes).
  Parameters
  - sequences (torch.LongTensor of shape (batch_size*num_return_sequences, sequence_length)) – The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
  - scores (tuple(torch.FloatTensor), optional, returned when output_scores=True is passed or when config.output_scores=True) – Processed prediction scores of the language modeling head (scores for each vocabulary token before SoftMax) at each generation step. (max_length,)-shaped tuple of torch.FloatTensor with each tensor of shape (batch_size*num_return_sequences, config.vocab_size).
  - encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer of the encoder) of shape (batch_size*num_return_sequences, num_heads, sequence_length, sequence_length).
  - encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size*num_return_sequences, sequence_length, hidden_size).
  - decoder_attentions (tuple(tuple(torch.FloatTensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) – Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size*num_return_sequences, num_heads, generated_length, sequence_length).
  - cross_attentions (tuple(tuple(torch.FloatTensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) – Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size, num_heads, generated_length, sequence_length).
  - decoder_hidden_states (tuple(tuple(torch.FloatTensor)), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size*num_return_sequences, generated_length, hidden_size).
BeamSearchOutput
- class transformers.generation_utils.BeamSearchDecoderOnlyOutput(sequences: torch.LongTensor = None, sequences_scores: Optional[torch.FloatTensor] = None, scores: Optional[Tuple[torch.FloatTensor]] = None, attentions: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, hidden_states: Optional[Tuple[Tuple[torch.FloatTensor]]] = None)[source]
  Base class for outputs of decoder-only generation models using beam search.
  Parameters
  - sequences (torch.LongTensor of shape (batch_size*num_return_sequences, sequence_length)) – The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
  - sequences_scores (torch.FloatTensor of shape (batch_size*num_return_sequences), optional, returned when output_scores=True is passed or when config.output_scores=True) – Final beam scores of the generated sequences.
  - scores (tuple(torch.FloatTensor), optional, returned when output_scores=True is passed or when config.output_scores=True) – Processed beam scores for each vocabulary token at each generation step. Beam scores consist of the log softmax score for each vocabulary token plus the sum of the log softmax scores of the previously generated tokens in this beam. (max_length,)-shaped tuple of torch.FloatTensor with each tensor of shape (batch_size*num_beams*num_return_sequences, config.vocab_size).
  - attentions (tuple(tuple(torch.FloatTensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) – Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size*num_beams, num_heads, generated_length, sequence_length).
  - hidden_states (tuple(tuple(torch.FloatTensor)), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size*num_beams*num_return_sequences, generated_length, hidden_size).
- class transformers.generation_utils.BeamSearchEncoderDecoderOutput(sequences: torch.LongTensor = None, sequences_scores: Optional[torch.FloatTensor] = None, scores: Optional[Tuple[torch.FloatTensor]] = None, encoder_attentions: Optional[Tuple[torch.FloatTensor]] = None, encoder_hidden_states: Optional[Tuple[torch.FloatTensor]] = None, decoder_attentions: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, cross_attentions: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, decoder_hidden_states: Optional[Tuple[Tuple[torch.FloatTensor]]] = None)[source]
  Base class for outputs of encoder-decoder generation models using beam search. Hidden states and attention weights of the encoder (respectively the decoder) can be accessed via the encoder_attentions and encoder_hidden_states attributes (respectively the decoder_attentions and decoder_hidden_states attributes).
  Parameters
  - sequences (torch.LongTensor of shape (batch_size*num_return_sequences, sequence_length)) – The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
  - sequences_scores (torch.FloatTensor of shape (batch_size*num_return_sequences), optional, returned when output_scores=True is passed or when config.output_scores=True) – Final beam scores of the generated sequences.
  - scores (tuple(torch.FloatTensor), optional, returned when output_scores=True is passed or when config.output_scores=True) – Processed beam scores for each vocabulary token at each generation step. Beam scores consist of the log softmax score for each vocabulary token plus the sum of the log softmax scores of the previously generated tokens in this beam. (max_length,)-shaped tuple of torch.FloatTensor with each tensor of shape (batch_size*num_beams, config.vocab_size).
  - encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer of the encoder) of shape (batch_size, num_heads, sequence_length, sequence_length).
  - encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size*num_beams*num_return_sequences, sequence_length, hidden_size).
  - decoder_attentions (tuple(tuple(torch.FloatTensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) – Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size*num_beams*num_return_sequences, num_heads, generated_length, sequence_length).
  - cross_attentions (tuple(tuple(torch.FloatTensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) – Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size, num_heads, generated_length, sequence_length).
  - decoder_hidden_states (tuple(tuple(torch.FloatTensor)), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size*num_beams*num_return_sequences, generated_length, hidden_size).
BeamSampleOutput
- class transformers.generation_utils.BeamSampleDecoderOnlyOutput(sequences: torch.LongTensor = None, sequences_scores: Optional[torch.FloatTensor] = None, scores: Optional[Tuple[torch.FloatTensor]] = None, attentions: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, hidden_states: Optional[Tuple[Tuple[torch.FloatTensor]]] = None)[source]
  Base class for outputs of decoder-only generation models using beam sampling.
  Parameters
  - sequences (torch.LongTensor of shape (batch_size*num_return_sequences, sequence_length)) – The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
  - sequences_scores (torch.FloatTensor of shape (batch_size*num_return_sequences), optional, returned when output_scores=True is passed or when config.output_scores=True) – Final beam scores of the generated sequences.
  - scores (tuple(torch.FloatTensor), optional, returned when output_scores=True is passed or when config.output_scores=True) – Processed beam scores for each vocabulary token at each generation step. Beam scores consist of the log softmax score for each vocabulary token plus the sum of the log softmax scores of the previously generated tokens in this beam. (max_length,)-shaped tuple of torch.FloatTensor with each tensor of shape (batch_size*num_beams*num_return_sequences, config.vocab_size).
  - attentions (tuple(tuple(torch.FloatTensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) – Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size*num_beams, num_heads, generated_length, sequence_length).
  - hidden_states (tuple(tuple(torch.FloatTensor)), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size*num_beams, generated_length, hidden_size).
- class transformers.generation_utils.BeamSampleEncoderDecoderOutput(sequences: torch.LongTensor = None, sequences_scores: Optional[torch.FloatTensor] = None, scores: Optional[Tuple[torch.FloatTensor]] = None, encoder_attentions: Optional[Tuple[torch.FloatTensor]] = None, encoder_hidden_states: Optional[Tuple[torch.FloatTensor]] = None, decoder_attentions: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, cross_attentions: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, decoder_hidden_states: Optional[Tuple[Tuple[torch.FloatTensor]]] = None)[source]
  Base class for outputs of encoder-decoder generation models using beam sampling. Hidden states and attention weights of the encoder (respectively the decoder) can be accessed via the encoder_attentions and encoder_hidden_states attributes (respectively the decoder_attentions and decoder_hidden_states attributes).
  Parameters
  - sequences (torch.LongTensor of shape (batch_size*num_beams, sequence_length)) – The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
  - sequences_scores (torch.FloatTensor of shape (batch_size*num_return_sequences), optional, returned when output_scores=True is passed or when config.output_scores=True) – Final beam scores of the generated sequences.
  - scores (tuple(torch.FloatTensor), optional, returned when output_scores=True is passed or when config.output_scores=True) – Processed beam scores for each vocabulary token at each generation step. Beam scores consist of the log softmax score for each vocabulary token plus the sum of the log softmax scores of the previously generated tokens in this beam. (max_length,)-shaped tuple of torch.FloatTensor with each tensor of shape (batch_size*num_beams, config.vocab_size).
  - encoder_attentions (tuple(torch.FloatTensor), optional, returned when output_attentions=True is passed or config.output_attentions=True) – Tuple of torch.FloatTensor (one for each layer of the encoder) of shape (batch_size, num_heads, sequence_length, sequence_length).
  - encoder_hidden_states (tuple(torch.FloatTensor), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple of torch.FloatTensor (one for the output of the embeddings + one for the output of each layer) of shape (batch_size*num_beams, sequence_length, hidden_size).
  - decoder_attentions (tuple(tuple(torch.FloatTensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) – Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size*num_beams, num_heads, generated_length, sequence_length).
  - cross_attentions (tuple(tuple(torch.FloatTensor)), optional, returned when output_attentions=True is passed or config.output_attentions=True) – Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size, num_heads, generated_length, sequence_length).
  - decoder_hidden_states (tuple(tuple(torch.FloatTensor)), optional, returned when output_hidden_states=True is passed or when config.output_hidden_states=True) – Tuple (one element for each generated token) of tuples (one element for each layer of the decoder) of torch.FloatTensor of shape (batch_size*num_beams, generated_length, hidden_size).
LogitsProcessor
A LogitsProcessor can be used to modify the prediction scores of a language model head for generation.
- class transformers.LogitsProcessor[source]
  Abstract base class for all logit processors that can be applied during generation.
  - __call__(input_ids: torch.LongTensor, scores: torch.FloatTensor) → torch.FloatTensor[source]
    Torch method for processing logits.
    Parameters
    - input_ids (torch.LongTensor of shape (batch_size, sequence_length)) – Indices of input sequence tokens in the vocabulary. Indices can be obtained using BertTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.
    - scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) – Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax or scores for each vocabulary token after SoftMax.
    - kwargs – Additional logits processor specific kwargs.
    Returns
    torch.FloatTensor of shape (batch_size, config.vocab_size): The processed prediction scores.
- class transformers.LogitsProcessorList[source]
  This class can be used to create a list of LogitsProcessor or LogitsWarper to subsequently process a scores input tensor. This class inherits from list and adds a specific __call__ method to apply each LogitsProcessor or LogitsWarper to the inputs.
  - __call__(input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) → torch.FloatTensor[source]
    Parameters
    - input_ids (torch.LongTensor of shape (batch_size, sequence_length)) – Indices of input sequence tokens in the vocabulary. Indices can be obtained using BertTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.
    - scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) – Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax or scores for each vocabulary token after SoftMax.
    - kwargs – Additional logits processor specific kwargs.
    Returns
    The processed prediction scores.
    Return type
    torch.FloatTensor of shape (batch_size, config.vocab_size)
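A minimal sketch of assembling such a list and applying it to a batch of scores, using processors documented below (the token ids and vocabulary size are illustrative):

import torch

from transformers import (
    LogitsProcessorList,
    MinLengthLogitsProcessor,
    NoRepeatNGramLogitsProcessor,
)

processors = LogitsProcessorList([
    MinLengthLogitsProcessor(min_length=10, eos_token_id=50256),
    NoRepeatNGramLogitsProcessor(ngram_size=2),
])

input_ids = torch.tensor([[464, 3290]])  # (batch_size, sequence_length)
scores = torch.randn(1, 50257)           # (batch_size, config.vocab_size)
scores = processors(input_ids, scores)   # each processor is applied in order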
- class transformers.LogitsWarper[source]
  Abstract base class for all logit warpers that can be applied during generation with multinomial sampling.
  - __call__(input_ids: torch.LongTensor, scores: torch.FloatTensor) → torch.FloatTensor[source]
    Torch method for warping logits.
    Parameters
    - input_ids (torch.LongTensor of shape (batch_size, sequence_length)) – Indices of input sequence tokens in the vocabulary. Indices can be obtained using BertTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.
    - scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) – Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax or scores for each vocabulary token after SoftMax.
    - kwargs – Additional logits processor specific kwargs.
    Returns
    torch.FloatTensor of shape (batch_size, config.vocab_size): The processed prediction scores.
- class transformers.MinLengthLogitsProcessor(min_length: int, eos_token_id: int)[source]
  transformers.LogitsProcessor enforcing a min-length by setting the EOS probability to 0.
  Parameters
  - min_length (int) – The minimum length below which the score of eos_token_id is set to -float("Inf").
  - eos_token_id (int) – The id of the end-of-sequence token.
  - __call__(input_ids: torch.LongTensor, scores: torch.FloatTensor) → torch.FloatTensor[source]
    Torch method for processing logits.
    Parameters
    - input_ids (torch.LongTensor of shape (batch_size, sequence_length)) – Indices of input sequence tokens in the vocabulary. Indices can be obtained using BertTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.
    - scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) – Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax or scores for each vocabulary token after SoftMax.
    - kwargs – Additional logits processor specific kwargs.
    Returns
    torch.FloatTensor of shape (batch_size, config.vocab_size): The processed prediction scores.
- class transformers.TemperatureLogitsWarper(temperature: float)[source]
  transformers.LogitsWarper for temperature (exponential scaling of the output probability distribution).
  Parameters
  - temperature (float) – The value used to modulate the logits distribution.
  - __call__(input_ids: torch.Tensor, scores: torch.Tensor) → torch.Tensor[source]
    Torch method for warping logits.
    Parameters
    - input_ids (torch.LongTensor of shape (batch_size, sequence_length)) – Indices of input sequence tokens in the vocabulary. Indices can be obtained using BertTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.
    - scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) – Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax or scores for each vocabulary token after SoftMax.
    - kwargs – Additional logits processor specific kwargs.
    Returns
    torch.FloatTensor of shape (batch_size, config.vocab_size): The processed prediction scores.
- class transformers.RepetitionPenaltyLogitsProcessor(penalty: float)[source]
  transformers.LogitsProcessor enforcing an exponential penalty on repeated sequences.
  Parameters
  - penalty (float) – The parameter for repetition penalty. 1.0 means no penalty. See this paper for more details.
  - __call__(input_ids: torch.LongTensor, scores: torch.FloatTensor) → torch.FloatTensor[source]
    Torch method for processing logits.
    Parameters
    - input_ids (torch.LongTensor of shape (batch_size, sequence_length)) – Indices of input sequence tokens in the vocabulary. Indices can be obtained using BertTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.
    - scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) – Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax or scores for each vocabulary token after SoftMax.
    - kwargs – Additional logits processor specific kwargs.
    Returns
    torch.FloatTensor of shape (batch_size, config.vocab_size): The processed prediction scores.
- class transformers.TopPLogitsWarper(top_p: float, filter_value: float = -inf, min_tokens_to_keep: int = 1)[source]
  transformers.LogitsWarper that performs top-p filtering, i.e. restricting to the smallest set of most probable tokens whose cumulative probability is at least top_p.
  Parameters
  - top_p (float) – If set to < 1, only the most probable tokens with probabilities that add up to top_p or higher are kept for generation.
  - filter_value (float, optional, defaults to -float("Inf")) – All filtered values will be set to this float value.
  - min_tokens_to_keep (int, optional, defaults to 1) – Minimum number of tokens that cannot be filtered.
  - __call__(input_ids: torch.LongTensor, scores: torch.FloatTensor) → torch.FloatTensor[source]
    Torch method for warping logits.
    Parameters
    - input_ids (torch.LongTensor of shape (batch_size, sequence_length)) – Indices of input sequence tokens in the vocabulary. Indices can be obtained using BertTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.
    - scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) – Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax or scores for each vocabulary token after SoftMax.
    - kwargs – Additional logits processor specific kwargs.
    Returns
    torch.FloatTensor of shape (batch_size, config.vocab_size): The processed prediction scores.
- class transformers.TopKLogitsWarper(top_k: int, filter_value: float = -inf, min_tokens_to_keep: int = 1)[source]
  transformers.LogitsWarper that performs top-k filtering, i.e. restricting to the k highest probability elements.
  Parameters
  - top_k (int) – The number of highest probability vocabulary tokens to keep for top-k filtering.
  - filter_value (float, optional, defaults to -float("Inf")) – All filtered values will be set to this float value.
  - min_tokens_to_keep (int, optional, defaults to 1) – Minimum number of tokens that cannot be filtered.
  - __call__(input_ids: torch.LongTensor, scores: torch.FloatTensor) → torch.FloatTensor[source]
    Torch method for warping logits.
    Parameters
    - input_ids (torch.LongTensor of shape (batch_size, sequence_length)) – Indices of input sequence tokens in the vocabulary. Indices can be obtained using BertTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.
    - scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) – Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax or scores for each vocabulary token after SoftMax.
    - kwargs – Additional logits processor specific kwargs.
    Returns
    torch.FloatTensor of shape (batch_size, config.vocab_size): The processed prediction scores.
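As a sketch, the warpers documented above are typically chained and applied before sampling (the token ids, vocabulary size, and hyperparameter values are illustrative):

import torch

from transformers import (
    LogitsProcessorList,
    TemperatureLogitsWarper,
    TopKLogitsWarper,
    TopPLogitsWarper,
)

warpers = LogitsProcessorList([
    TemperatureLogitsWarper(temperature=0.7),
    TopKLogitsWarper(top_k=50),
    TopPLogitsWarper(top_p=0.95),
])

input_ids = torch.tensor([[464, 3290]])  # (batch_size, sequence_length)
scores = torch.randn(1, 50257)           # (batch_size, config.vocab_size)
scores = warpers(input_ids, scores)      # filtered entries are set to -inf
probs = torch.softmax(scores, dim=-1)
next_tokens = torch.multinomial(probs, num_samples=1)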
- class transformers.NoRepeatNGramLogitsProcessor(ngram_size: int)[source]
  transformers.LogitsProcessor that enforces no repetition of n-grams. See Fairseq.
  Parameters
  - ngram_size (int) – All ngrams of size ngram_size can only occur once.
  - __call__(input_ids: torch.LongTensor, scores: torch.FloatTensor) → torch.FloatTensor[source]
    Torch method for processing logits.
    Parameters
    - input_ids (torch.LongTensor of shape (batch_size, sequence_length)) – Indices of input sequence tokens in the vocabulary. Indices can be obtained using BertTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.
    - scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) – Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax or scores for each vocabulary token after SoftMax.
    - kwargs – Additional logits processor specific kwargs.
    Returns
    torch.FloatTensor of shape (batch_size, config.vocab_size): The processed prediction scores.
- class transformers.NoBadWordsLogitsProcessor(bad_words_ids: Iterable[Iterable[int]], eos_token_id: int)[source]
  transformers.LogitsProcessor that enforces that specified sequences will never be sampled.
  Parameters
  - bad_words_ids (List[List[int]]) – List of lists of token ids that are not allowed to be generated. In order to get the token ids of the words that should not appear in the generated text, use tokenizer(bad_word, add_prefix_space=True).input_ids.
  - eos_token_id (int) – The id of the end-of-sequence token.
  - __call__(input_ids: torch.LongTensor, scores: torch.FloatTensor) → torch.FloatTensor[source]
    Torch method for processing logits.
    Parameters
    - input_ids (torch.LongTensor of shape (batch_size, sequence_length)) – Indices of input sequence tokens in the vocabulary. Indices can be obtained using BertTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.
    - scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) – Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax or scores for each vocabulary token after SoftMax.
    - kwargs – Additional logits processor specific kwargs.
    Returns
    torch.FloatTensor of shape (batch_size, config.vocab_size): The processed prediction scores.
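A small sketch of building bad_words_ids with a tokenizer, as the description above suggests (the banned words are illustrative):

from transformers import GPT2Tokenizer, NoBadWordsLogitsProcessor

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
# One list of token ids per banned word; add_prefix_space matches how the
# words would be tokenized in the middle of a sentence.
bad_words_ids = tokenizer(["ugly", "stupid"], add_prefix_space=True).input_ids

processor = NoBadWordsLogitsProcessor(bad_words_ids, eos_token_id=tokenizer.eos_token_id)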
- class transformers.PrefixConstrainedLogitsProcessor(prefix_allowed_tokens_fn: Callable[[int, torch.Tensor], List[int]], num_beams: int)[source]
  transformers.LogitsProcessor that enforces constrained generation and is useful for prefix-conditioned constrained generation. See Autoregressive Entity Retrieval for more information.
  Parameters
  - prefix_allowed_tokens_fn (Callable[[int, torch.Tensor], List[int]]) – This function constrains the beam search to allowed tokens only at each step. It takes 2 arguments: inputs_ids and the batch ID batch_id. It has to return a list with the allowed tokens for the next generation step, conditioned on the previously generated tokens inputs_ids and the batch ID batch_id.
  - __call__(input_ids: torch.LongTensor, scores: torch.FloatTensor) → torch.FloatTensor[source]
    Torch method for processing logits.
    Parameters
    - input_ids (torch.LongTensor of shape (batch_size, sequence_length)) – Indices of input sequence tokens in the vocabulary. Indices can be obtained using BertTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.
    - scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) – Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax or scores for each vocabulary token after SoftMax.
    - kwargs – Additional logits processor specific kwargs.
    Returns
    torch.FloatTensor of shape (batch_size, config.vocab_size): The processed prediction scores.
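As a sketch, prefix_allowed_tokens_fn can, for example, restrict every step to a fixed set of allowed ids (the allowed set here is hypothetical):

from transformers import PrefixConstrainedLogitsProcessor

allowed_token_ids = [262, 464, 818]  # hypothetical set of permitted token ids

def prefix_allowed_tokens_fn(batch_id, inputs_ids):
    # Given the batch index and the tokens generated so far,
    # return the ids allowed at the next generation step.
    return allowed_token_ids

processor = PrefixConstrainedLogitsProcessor(prefix_allowed_tokens_fn, num_beams=1)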
- class transformers.HammingDiversityLogitsProcessor(diversity_penalty: float, num_beams: int, num_beam_groups: int)[source]
  transformers.LogitsProcessor that enforces diverse beam search. Note that this logits processor is only effective for transformers.PreTrainedModel.group_beam_search(). See Diverse Beam Search: Decoding Diverse Solutions from Neural Sequence Models for more details.
  Parameters
  - diversity_penalty (float) – This value is subtracted from a beam's score if it generates a token that any beam from another group has selected at the same time step. Note that diversity_penalty is only effective if group beam search is enabled.
  - num_beams (int) – Number of beams used for group beam search. See this paper for more details.
  - num_beam_groups (int) – Number of groups to divide num_beams into in order to ensure diversity among different groups of beams. See this paper for more details.
  - __call__(input_ids: torch.LongTensor, scores: torch.FloatTensor, current_tokens: torch.LongTensor, beam_group_idx: int) → torch.FloatTensor[source]
    Torch method for processing logits.
    Parameters
    - input_ids (torch.LongTensor of shape (batch_size, sequence_length)) – Indices of input sequence tokens in the vocabulary. Indices can be obtained using BertTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.
    - scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) – Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax or scores for each vocabulary token after SoftMax.
    - kwargs – Additional logits processor specific kwargs.
    Returns
    torch.FloatTensor of shape (batch_size, config.vocab_size): The processed prediction scores.
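This processor is rarely instantiated by hand; a sketch of enabling it through generate(), reusing the model and inputs from the example at the top of this page (the argument values are illustrative):

outputs = model.generate(
    **inputs,
    num_beams=6,
    num_beam_groups=3,     # enables group beam search with a diversity penalty
    diversity_penalty=1.0,
)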
- class transformers.ForcedBOSTokenLogitsProcessor(bos_token_id: int)[source]
  LogitsProcessor that enforces the specified token as the first generated token.
  Parameters
  - bos_token_id (int) – The id of the token to force as the first generated token.
  - __call__(input_ids: torch.LongTensor, scores: torch.FloatTensor) → torch.FloatTensor[source]
    Torch method for processing logits.
    Parameters
    - input_ids (torch.LongTensor of shape (batch_size, sequence_length)) – Indices of input sequence tokens in the vocabulary. Indices can be obtained using BertTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.
    - scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) – Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax or scores for each vocabulary token after SoftMax.
    - kwargs – Additional logits processor specific kwargs.
    Returns
    torch.FloatTensor of shape (batch_size, config.vocab_size): The processed prediction scores.
- class transformers.ForcedEOSTokenLogitsProcessor(max_length: int, eos_token_id: int)[source]
  LogitsProcessor that enforces the specified token as the last generated token when max_length is reached.
  Parameters
  - max_length (int) – The maximum length of the sequence to be generated.
  - eos_token_id (int) – The id of the token to force as the last generated token when max_length is reached.
  - __call__(input_ids: torch.LongTensor, scores: torch.FloatTensor) → torch.FloatTensor[source]
    Torch method for processing logits.
    Parameters
    - input_ids (torch.LongTensor of shape (batch_size, sequence_length)) – Indices of input sequence tokens in the vocabulary. Indices can be obtained using BertTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.
    - scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) – Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax or scores for each vocabulary token after SoftMax.
    - kwargs – Additional logits processor specific kwargs.
    Returns
    torch.FloatTensor of shape (batch_size, config.vocab_size): The processed prediction scores.
- class transformers.InfNanRemoveLogitsProcessor[source]
  LogitsProcessor that removes all nan and inf values to avoid the generation method failing. Note that this logits processor should only be used if necessary, since it can slow down the generation method.
  - __call__(input_ids: torch.LongTensor, scores: torch.FloatTensor) → torch.FloatTensor[source]
    Torch method for processing logits.
    Parameters
    - input_ids (torch.LongTensor of shape (batch_size, sequence_length)) – Indices of input sequence tokens in the vocabulary. Indices can be obtained using BertTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.
    - scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) – Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax or scores for each vocabulary token after SoftMax.
    - kwargs – Additional logits processor specific kwargs.
    Returns
    torch.FloatTensor of shape (batch_size, config.vocab_size): The processed prediction scores.
StoppingCriteria
A StoppingCriteria can be used to change when to stop generation (other than the EOS token).
- class transformers.StoppingCriteria[source]
  Abstract base class for all stopping criteria that can be applied during generation.
  - __call__(input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) → bool[source]
    Parameters
    - input_ids (torch.LongTensor of shape (batch_size, sequence_length)) – Indices of input sequence tokens in the vocabulary. Indices can be obtained using BertTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.
    - scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) – Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax or scores for each vocabulary token after SoftMax.
    - kwargs – Additional stopping criteria specific kwargs.
    Returns
    bool. False indicates we should continue, True indicates we should stop.
- class transformers.StoppingCriteriaList[source]
  - __call__(input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) → bool[source]
    Parameters
    - input_ids (torch.LongTensor of shape (batch_size, sequence_length)) – Indices of input sequence tokens in the vocabulary. Indices can be obtained using BertTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.
    - scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) – Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax or scores for each vocabulary token after SoftMax.
    - kwargs – Additional stopping criteria specific kwargs.
    Returns
    bool. False indicates we should continue, True indicates we should stop.
- class transformers.MaxLengthCriteria(max_length: int)[source]
  This class can be used to stop generation whenever the full generated number of tokens exceeds max_length. Keep in mind that for decoder-only transformers this count includes the initial prompt tokens.
  Parameters
  - max_length (int) – The maximum length that the output sequence can have in number of tokens.
  - __call__(input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) → bool[source]
    Parameters
    - input_ids (torch.LongTensor of shape (batch_size, sequence_length)) – Indices of input sequence tokens in the vocabulary. Indices can be obtained using BertTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.
    - scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) – Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax or scores for each vocabulary token after SoftMax.
    - kwargs – Additional stopping criteria specific kwargs.
    Returns
    bool. False indicates we should continue, True indicates we should stop.
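A minimal sketch of passing criteria to the underlying generation loop, reusing the model and inputs from the example at the top of this page (greedy_search() accepts a stopping_criteria argument):

from transformers import MaxLengthCriteria, StoppingCriteriaList

stopping_criteria = StoppingCriteriaList([MaxLengthCriteria(max_length=20)])
outputs = model.greedy_search(inputs["input_ids"], stopping_criteria=stopping_criteria)
print(tokenizer.decode(outputs[0]))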
- class transformers.MaxTimeCriteria(max_time: float, initial_timestamp: Optional[float] = None)[source]
  This class can be used to stop generation whenever the full generation exceeds some amount of time. By default, the time will start being counted when you initialize this class. You can override this by passing an initial_timestamp.
  Parameters
  - max_time (float) – The maximum allowed time in seconds for the generation.
  - initial_timestamp (float, optional, defaults to time.time()) – The start of the allowed generation time.
  - __call__(input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) → bool[source]
    Parameters
    - input_ids (torch.LongTensor of shape (batch_size, sequence_length)) – Indices of input sequence tokens in the vocabulary. Indices can be obtained using BertTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.
    - scores (torch.FloatTensor of shape (batch_size, config.vocab_size)) – Prediction scores of a language modeling head. These can be scores for each vocabulary token before SoftMax or scores for each vocabulary token after SoftMax.
    - kwargs – Additional stopping criteria specific kwargs.
    Returns
    bool. False indicates we should continue, True indicates we should stop.
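A time budget can be combined with other criteria in a StoppingCriteriaList; a small sketch (values are illustrative):

from transformers import MaxLengthCriteria, MaxTimeCriteria, StoppingCriteriaList

stopping_criteria = StoppingCriteriaList([
    MaxLengthCriteria(max_length=100),
    MaxTimeCriteria(max_time=5.0),  # stop after roughly five seconds
])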
BeamSearch
- class transformers.BeamScorer[source]
  Abstract base class for all beam scorers that are used for beam_search() and beam_sample().
  - abstract finalize(input_ids: torch.LongTensor, next_scores: torch.FloatTensor, next_tokens: torch.LongTensor, next_indices: torch.LongTensor, **kwargs) → torch.LongTensor[source]
    Parameters
    - input_ids (torch.LongTensor of shape (batch_size * num_beams, sequence_length)) – Indices of input sequence tokens in the vocabulary. Indices can be obtained using any class inheriting from PreTrainedTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.
    - final_beam_scores (torch.FloatTensor of shape (batch_size * num_beams)) – The final scores of all non-finished beams.
    - final_beam_tokens (torch.FloatTensor of shape (batch_size * num_beams)) – The last tokens to be added to the non-finished beam_hypotheses.
    - final_beam_indices (torch.FloatTensor of shape (batch_size * num_beams)) – The beam indices indicating to which beam the final_beam_tokens shall be added.
    - pad_token_id (int, optional) – The id of the padding token.
    - eos_token_id (int, optional) – The id of the end-of-sequence token.
    Returns
    The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
    Return type
    torch.LongTensor of shape (batch_size * num_return_sequences, sequence_length)
  - abstract process(input_ids: torch.LongTensor, next_scores: torch.FloatTensor, next_tokens: torch.LongTensor, next_indices: torch.LongTensor, **kwargs) → Tuple[torch.Tensor][source]
    Parameters
    - input_ids (torch.LongTensor of shape (batch_size * num_beams, sequence_length)) – Indices of input sequence tokens in the vocabulary. Indices can be obtained using any class inheriting from PreTrainedTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.
    - next_scores (torch.FloatTensor of shape (batch_size, 2 * num_beams)) – Current scores of the top 2 * num_beams non-finished beam hypotheses.
    - next_tokens (torch.LongTensor of shape (batch_size, 2 * num_beams)) – input_ids of the tokens corresponding to the top 2 * num_beams non-finished beam hypotheses.
    - next_indices (torch.LongTensor of shape (batch_size, 2 * num_beams)) – Beam indices indicating to which beam hypothesis the next_tokens correspond.
    - pad_token_id (int, optional) – The id of the padding token.
    - eos_token_id (int, optional) – The id of the end-of-sequence token.
    Returns
    A dictionary composed of the fields as defined above:
    - next_beam_scores (torch.FloatTensor of shape (batch_size * num_beams)) – Updated scores of all non-finished beams.
    - next_beam_tokens (torch.FloatTensor of shape (batch_size * num_beams)) – Next tokens to be added to the non-finished beam_hypotheses.
    - next_beam_indices (torch.FloatTensor of shape (batch_size * num_beams)) – Beam indices indicating to which beam the next tokens shall be added.
    Return type
    UserDict
- class transformers.BeamSearchScorer(batch_size: int, max_length: int, num_beams: int, device: torch.device, length_penalty: Optional[float] = 1.0, do_early_stopping: Optional[bool] = False, num_beam_hyps_to_keep: Optional[int] = 1, num_beam_groups: Optional[int] = 1)[source]
  transformers.BeamScorer implementing standard beam search decoding.
  Adapted in part from Facebook's XLM beam search code. For the diverse beam search algorithm and implementation, see Ashwin Kalyan's DBS implementation.
  Parameters
  - batch_size (int) – Batch size of input_ids for which standard beam search decoding is run in parallel.
  - max_length (int) – The maximum length of the sequence to be generated.
  - num_beams (int) – Number of beams for beam search.
  - device (torch.device) – Defines the device type (e.g., "cpu" or "cuda") on which this instance of BeamSearchScorer will be allocated.
  - length_penalty (float, optional, defaults to 1.0) – Exponential penalty to the length. 1.0 means no penalty. Set to values < 1.0 in order to encourage the model to generate shorter sequences, or to a value > 1.0 in order to encourage the model to produce longer sequences.
  - do_early_stopping (bool, optional, defaults to False) – Whether to stop the beam search when at least num_beams sentences are finished per batch or not.
  - num_beam_hyps_to_keep (int, optional, defaults to 1) – The number of beam hypotheses that shall be returned upon calling finalize().
  - num_beam_groups (int) – Number of groups to divide num_beams into in order to ensure diversity among different groups of beams. See this paper for more details.
  - finalize(input_ids: torch.LongTensor, final_beam_scores: torch.FloatTensor, final_beam_tokens: torch.LongTensor, final_beam_indices: torch.LongTensor, pad_token_id: Optional[int] = None, eos_token_id: Optional[int] = None) → Tuple[torch.LongTensor][source]
    Parameters
    - input_ids (torch.LongTensor of shape (batch_size * num_beams, sequence_length)) – Indices of input sequence tokens in the vocabulary. Indices can be obtained using any class inheriting from PreTrainedTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.
    - final_beam_scores (torch.FloatTensor of shape (batch_size * num_beams)) – The final scores of all non-finished beams.
    - final_beam_tokens (torch.FloatTensor of shape (batch_size * num_beams)) – The last tokens to be added to the non-finished beam_hypotheses.
    - final_beam_indices (torch.FloatTensor of shape (batch_size * num_beams)) – The beam indices indicating to which beam the final_beam_tokens shall be added.
    - pad_token_id (int, optional) – The id of the padding token.
    - eos_token_id (int, optional) – The id of the end-of-sequence token.
    Returns
    The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.
    Return type
    torch.LongTensor of shape (batch_size * num_return_sequences, sequence_length)
  - process(input_ids: torch.LongTensor, next_scores: torch.FloatTensor, next_tokens: torch.LongTensor, next_indices: torch.LongTensor, pad_token_id: Optional[int] = None, eos_token_id: Optional[int] = None) → Tuple[torch.Tensor][source]
    Parameters
    - input_ids (torch.LongTensor of shape (batch_size * num_beams, sequence_length)) – Indices of input sequence tokens in the vocabulary. Indices can be obtained using any class inheriting from PreTrainedTokenizer. See transformers.PreTrainedTokenizer.encode() and transformers.PreTrainedTokenizer.__call__() for details.
    - next_scores (torch.FloatTensor of shape (batch_size, 2 * num_beams)) – Current scores of the top 2 * num_beams non-finished beam hypotheses.
    - next_tokens (torch.LongTensor of shape (batch_size, 2 * num_beams)) – input_ids of the tokens corresponding to the top 2 * num_beams non-finished beam hypotheses.
    - next_indices (torch.LongTensor of shape (batch_size, 2 * num_beams)) – Beam indices indicating to which beam hypothesis the next_tokens correspond.
    - pad_token_id (int, optional) – The id of the padding token.
    - eos_token_id (int, optional) – The id of the end-of-sequence token.
    Returns
    A dictionary composed of the fields as defined above:
    - next_beam_scores (torch.FloatTensor of shape (batch_size * num_beams)) – Updated scores of all non-finished beams.
    - next_beam_tokens (torch.FloatTensor of shape (batch_size * num_beams)) – Next tokens to be added to the non-finished beam_hypotheses.
    - next_beam_indices (torch.FloatTensor of shape (batch_size * num_beams)) – Beam indices indicating to which beam the next tokens shall be added.
    Return type
    UserDict
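As a sketch of how these pieces fit together, a scorer can be constructed by hand and passed to beam_search() (the model choice and prompt are illustrative):

import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, BeamSearchScorer

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

encoder_input_ids = tokenizer("translate English to German: How old are you?", return_tensors="pt").input_ids

num_beams = 3
# The decoder starts from `decoder_start_token_id`, replicated once per beam.
input_ids = torch.full((num_beams, 1), model.config.decoder_start_token_id, dtype=torch.long)

# The encoder outputs must be expanded to one copy per beam.
model_kwargs = {
    "encoder_outputs": model.get_encoder()(
        encoder_input_ids.repeat_interleave(num_beams, dim=0), return_dict=True
    )
}

beam_scorer = BeamSearchScorer(
    batch_size=1,
    max_length=model.config.max_length,
    num_beams=num_beams,
    device=model.device,
)
outputs = model.beam_search(input_ids, beam_scorer, **model_kwargs)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))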
Utilities
- transformers.top_k_top_p_filtering(logits: torch.FloatTensor, top_k: int = 0, top_p: float = 1.0, filter_value: float = -inf, min_tokens_to_keep: int = 1) → torch.FloatTensor[source]
  Filter a distribution of logits using top-k and/or nucleus (top-p) filtering.
  Parameters
  - logits – logits distribution of shape (batch size, vocabulary size).
  - top_k – if > 0, keep only the top k tokens with highest probability (top-k filtering).
  - top_p – if < 1.0, keep the top tokens with cumulative probability >= top_p (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751).
  - min_tokens_to_keep – make sure we keep at least min_tokens_to_keep tokens per batch example in the output.
  From: https://gist.github.com/thomwolf/1a5a29f6962089e871b94cbd09daf317
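A minimal sketch of filtering a batch of logits and sampling from the result (the vocabulary size and hyperparameters are illustrative):

import torch
from transformers import top_k_top_p_filtering

logits = torch.randn(1, 50257)  # (batch size, vocabulary size)
filtered = top_k_top_p_filtering(logits, top_k=50, top_p=0.95)

# Filtered positions are set to -inf, so they receive zero probability.
probs = torch.softmax(filtered, dim=-1)
next_token = torch.multinomial(probs, num_samples=1)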
- transformers.tf_top_k_top_p_filtering(logits, top_k=0, top_p=1.0, filter_value=-inf, min_tokens_to_keep=1)[source]
  Filter a distribution of logits using top-k and/or nucleus (top-p) filtering.
  Parameters
  - logits – logits distribution of shape (batch size, vocabulary size).
  - top_k – if > 0, keep only the top k tokens with highest probability (top-k filtering).
  - top_p – if < 1.0, keep the top tokens with cumulative probability >= top_p (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751).
  - min_tokens_to_keep – make sure we keep at least min_tokens_to_keep tokens per batch example in the output.
  From: https://gist.github.com/thomwolf/1a5a29f6962089e871b94cbd09daf317