Models¶
The base classes PreTrainedModel, TFPreTrainedModel, and FlaxPreTrainedModel implement the common methods for loading/saving a model either from a local file or directory, or from a pretrained model configuration provided by the library (downloaded from HuggingFace's AWS S3 repository).
PreTrainedModel and TFPreTrainedModel also implement a few methods which are common among all the models to:
- resize the input token embeddings when new tokens are added to the vocabulary,
- prune the attention heads of the model.
The other methods that are common to each model are defined in ModuleUtilsMixin (for the PyTorch models) and TFModelUtilsMixin (for the TensorFlow models) or, for text generation, GenerationMixin (for the PyTorch models), TFGenerationMixin (for the TensorFlow models) and FlaxGenerationMixin (for the Flax/JAX models).
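For instance, loading a pretrained checkpoint and saving it back to disk takes a couple of lines; a minimal sketch using BERT for illustration (the directory path is hypothetical):

>>> from transformers import BertModel
>>> # Download the weights and configuration from the hub (or load them from the local cache).
>>> model = BertModel.from_pretrained('bert-base-uncased')
>>> # Write both the weights and config.json to a local directory.
>>> model.save_pretrained('./my_model_directory/')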
PreTrainedModel¶
- class transformers.PreTrainedModel(config: transformers.configuration_utils.PretrainedConfig, *inputs, **kwargs)[source]¶
Base class for all models.
PreTrainedModel takes care of storing the configuration of the models and handles methods for loading, downloading and saving models as well as a few methods common to all models to:
- resize the input embeddings,
- prune heads in the self-attention layers.
Class attributes (overridden by derived classes):
- config_class (PretrainedConfig) – A subclass of PretrainedConfig to use as configuration class for this model architecture.
- load_tf_weights (Callable) – A python method for loading a TensorFlow checkpoint in a PyTorch model, taking as arguments:
  - model (PreTrainedModel) – An instance of the model on which to load the TensorFlow checkpoint.
  - config (PretrainedConfig) – An instance of the configuration associated to the model.
  - path (str) – A path to the TensorFlow checkpoint.
- base_model_prefix (str) – A string indicating the attribute associated to the base model in derived classes of the same architecture adding modules on top of the base model.
- is_parallelizable (bool) – A flag indicating whether this model supports model parallelization.
- property base_model¶
The main body of the model.
- Type: torch.nn.Module
- property dummy_inputs¶
Dummy inputs to do a forward pass in the network.
- Type: Dict[str, torch.Tensor]
- classmethod from_pretrained(pretrained_model_name_or_path: Optional[Union[str, os.PathLike]], *model_args, **kwargs)[source]¶
Instantiate a pretrained PyTorch model from a pre-trained model configuration.
The model is set in evaluation mode by default using model.eval() (Dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
The warning Weights from XXX not initialized from pretrained model means that the weights of XXX do not come pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning task.
The warning Weights from XXX not used in YYY means that the layer XXX is not used by YYY, therefore those weights are discarded.
- Parameters
- pretrained_model_name_or_path (str or os.PathLike, optional) – Can be either:
  - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
  - A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
  - A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
  - A path or url to a model folder containing a flax checkpoint file in .msgpack format (e.g., ./flax_model/ containing flax_model.msgpack). In this case, from_flax should be set to True.
  - None if you are both providing the configuration and state dictionary (resp. with keyword arguments config and state_dict).
- model_args (sequence of positional arguments, optional) – All remaining positional arguments will be passed to the underlying model's __init__ method.
- config (Union[PretrainedConfig, str, os.PathLike], optional) – Can be either:
  - an instance of a class derived from PretrainedConfig,
  - a string or path valid as input to from_pretrained().
  Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
  - The model is a model provided by the library (loaded with the model id string of a pretrained model).
  - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
  - The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
- state_dict (Dict[str, torch.Tensor], optional) – A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
- cache_dir (Union[str, os.PathLike], optional) – Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- from_tf (bool, optional, defaults to False) – Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
- from_flax (bool, optional, defaults to False) – Load the model weights from a Flax checkpoint save file (see docstring of pretrained_model_name_or_path argument).
- force_download (bool, optional, defaults to False) – Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- resume_download (bool, optional, defaults to False) – Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
- proxies (Dict[str, str], optional) – A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- output_loading_info (bool, optional, defaults to False) – Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- local_files_only (bool, optional, defaults to False) – Whether or not to only look at local files (i.e., do not try to download the model).
- use_auth_token (str or bool, optional) – The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running transformers-cli login (stored in ~/.huggingface).
- revision (str, optional, defaults to "main") – The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- mirror (str, optional) – Mirror source to accelerate downloads in China. If you are from China and have an accessibility problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. Please refer to the mirror site for more information.
- _fast_init (bool, optional, defaults to True) – Whether or not to disable fast initialization.
  Warning: One should only disable _fast_init to ensure backwards compatibility with transformers.__version__ < 4.6.0 for seeded model initialization. This argument will be removed at the next major version. See pull request 11471 for more information.
- kwargs (remaining dictionary of keyword arguments, optional) – Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
  - If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Note
Passing use_auth_token=True is required when you want to use a private model.
Note
Activate the special "offline-mode" to use this method in a firewalled environment.
Examples:
>>> from transformers import BertConfig, BertModel
>>> # Download model and configuration from huggingface.co and cache.
>>> model = BertModel.from_pretrained('bert-base-uncased')
>>> # Model was saved using `save_pretrained('./test/saved_model/')` (for example purposes, not runnable).
>>> model = BertModel.from_pretrained('./test/saved_model/')
>>> # Update configuration during loading.
>>> model = BertModel.from_pretrained('bert-base-uncased', output_attentions=True)
>>> assert model.config.output_attentions == True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower, for example purposes, not runnable).
>>> config = BertConfig.from_json_file('./tf_model/my_tf_model_config.json')
>>> model = BertModel.from_pretrained('./tf_model/my_tf_checkpoint.ckpt.index', from_tf=True, config=config)
>>> # Loading from a Flax checkpoint file instead of a PyTorch model (slower)
>>> model = BertModel.from_pretrained('bert-base-uncased', from_flax=True)
- get_input_embeddings() → torch.nn.modules.module.Module[source]¶
Returns the model's input embeddings.
- Returns: A torch module mapping vocabulary to hidden states.
- Return type: nn.Module
- get_output_embeddings() → torch.nn.modules.module.Module[source]¶
Returns the model's output embeddings.
- Returns: A torch module mapping hidden states to vocabulary.
- Return type: nn.Module
- prune_heads(heads_to_prune: Dict[int, List[int]])[source]¶
Prunes heads of the base model.
- Parameters: heads_to_prune (Dict[int, List[int]]) – Dictionary with keys being selected layer indices (int) and associated values being the list of heads to prune in said layer (list of int). For instance, {1: [0, 2], 2: [2, 3]} will prune heads 0 and 2 on layer 1 and heads 2 and 3 on layer 2.
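For instance, a minimal sketch of pruning heads on a BERT model (model name and indices chosen for illustration):

>>> from transformers import BertModel
>>> model = BertModel.from_pretrained('bert-base-uncased')
>>> # Prune heads 0 and 2 on layer 1, and heads 2 and 3 on layer 2.
>>> model.prune_heads({1: [0, 2], 2: [2, 3]})
>>> # The pruned heads are recorded on the configuration so they survive save/reload.
>>> print(model.config.pruned_heads)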
- resize_token_embeddings(new_num_tokens: Optional[int] = None) → torch.nn.modules.sparse.Embedding[source]¶
Resizes the input token embeddings matrix of the model if new_num_tokens != config.vocab_size.
Takes care of tying weights embeddings afterwards if the model class has a tie_weights() method.
- Parameters: new_num_tokens (int, optional) – The number of new tokens in the embedding matrix. Increasing the size will add newly initialized vectors at the end. Reducing the size will remove vectors from the end. If not provided or None, just returns a pointer to the input tokens torch.nn.Embedding module of the model without doing anything.
- Returns: Pointer to the input tokens Embeddings Module of the model.
- Return type: torch.nn.Embedding
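A typical use is resizing the embeddings after new tokens have been added to the tokenizer; a minimal sketch (the added token strings are illustrative):

>>> from transformers import BertTokenizer, BertModel
>>> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
>>> model = BertModel.from_pretrained('bert-base-uncased')
>>> num_added = tokenizer.add_tokens(['new_tok1', 'new_tok2'])
>>> # Resize the embedding matrix so the new token ids get (newly initialized) vectors.
>>> embeddings = model.resize_token_embeddings(len(tokenizer))
>>> assert embeddings.num_embeddings == len(tokenizer)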
- save_pretrained(save_directory: Union[str, os.PathLike], save_config: bool = True, state_dict: Optional[dict] = None, save_function: Callable = torch.save, push_to_hub: bool = False, **kwargs)[source]¶
Save a model and its configuration file to a directory, so that it can be re-loaded using the from_pretrained() class method.
- Parameters
- save_directory (str or os.PathLike) – Directory to which to save. Will be created if it doesn't exist.
- save_config (bool, optional, defaults to True) – Whether or not to save the config of the model. Useful when in distributed training like TPUs and need to call this function on all processes. In this case, set save_config=True only on the main process to avoid race conditions.
- state_dict (nested dictionary of torch.Tensor) – The state dictionary of the model to save. Will default to self.state_dict(), but can be used to only save parts of the model or if special precautions need to be taken when recovering the state dictionary of a model (like when using model parallelism).
- save_function (Callable) – The function to use to save the state dictionary. Useful on distributed training like TPUs when one needs to replace torch.save by another method.
- push_to_hub (bool, optional, defaults to False) – Whether or not to push your model to the Hugging Face model hub after saving it.
- kwargs – Additional keyword arguments passed along to the push_to_hub() method.
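A minimal save/reload round trip (the directory path is illustrative):

>>> from transformers import BertModel
>>> model = BertModel.from_pretrained('bert-base-uncased')
>>> # Writes pytorch_model.bin and config.json into the directory.
>>> model.save_pretrained('./my_model_directory/')
>>> # The directory can then be passed straight back to from_pretrained().
>>> reloaded = BertModel.from_pretrained('./my_model_directory/')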
ModuleUtilsMixin¶
- class transformers.modeling_utils.ModuleUtilsMixin[source]¶
A few utilities for torch.nn.Modules, to be used as a mixin.
- add_memory_hooks()[source]¶
Add a memory hook before and after each sub-module forward pass to record increase in memory consumption.
The increase in memory consumption is stored in a mem_rss_diff attribute for each module and can be reset to zero with model.reset_memory_hooks_state().
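A sketch of inspecting per-module memory growth with these hooks (assumes psutil is installed, which the hooks rely on; model and input are illustrative):

>>> import torch
>>> from transformers import BertModel
>>> model = BertModel.from_pretrained('bert-base-uncased')
>>> model.add_memory_hooks()
>>> outputs = model(torch.tensor([[101, 102]]))
>>> # Each sub-module now carries a mem_rss_diff attribute (in bytes).
>>> print(model.encoder.layer[0].mem_rss_diff)
>>> model.reset_memory_hooks_state()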
- property device¶
The device on which the module is (assuming that all the module parameters are on the same device).
- Type: torch.device
- property dtype¶
The dtype of the module (assuming that all the module parameters have the same dtype).
- Type: torch.dtype
- estimate_tokens(input_dict: Dict[str, Union[torch.Tensor, Any]]) → int[source]¶
Helper function to estimate the total number of tokens from the model inputs.
- Parameters: input_dict (dict) – The model inputs.
- Returns: The total number of tokens.
- Return type: int
- floating_point_ops(input_dict: Dict[str, Union[torch.Tensor, Any]], exclude_embeddings: bool = True) → int[source]¶
Get the number of (optionally, non-embeddings) floating-point operations for the forward and backward passes of a batch with this transformer model. The default approximation neglects the quadratic dependency on the number of tokens (valid if 12 * d_model << sequence_length), as laid out in this paper, section 2.1. Should be overridden for transformers with parameter re-use, e.g. ALBERT or Universal Transformers, or if doing long-range modeling with very high sequence lengths.
- Parameters
- input_dict (Dict[str, Union[torch.Tensor, Any]]) – The model inputs for one batch; the number of tokens is estimated from them.
- exclude_embeddings (bool, optional, defaults to True) – Whether or not to exclude embedding and softmax operations from the count.
- Returns: The number of floating-point operations.
- Return type: int
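A quick sketch of the typical call, feeding a tokenizer's output as the input dict (model and text are illustrative):

>>> from transformers import BertTokenizer, BertModel
>>> tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
>>> model = BertModel.from_pretrained('bert-base-uncased')
>>> inputs = tokenizer("Hello world!", return_tensors="pt")
>>> # Approximate forward + backward FLOPs for this single batch.
>>> flops = model.floating_point_ops(dict(inputs))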
- get_extended_attention_mask(attention_mask: torch.Tensor, input_shape: Tuple[int], device: torch.device) → torch.Tensor[source]¶
Makes broadcastable attention and causal masks so that future and masked tokens are ignored.
- Parameters
- attention_mask (torch.Tensor) – Mask with ones indicating tokens to attend to, zeros for tokens to ignore.
- input_shape (Tuple[int]) – The shape of the input to the model.
- device (torch.device) – The device of the input to the model.
- Returns: torch.Tensor – The extended attention mask, with the same dtype as attention_mask.dtype.
- get_head_mask(head_mask: Optional[torch.Tensor], num_hidden_layers: int, is_attention_chunked: bool = False) → torch.Tensor[source]¶
Prepare the head mask if needed.
- Parameters
- head_mask (torch.Tensor with shape [num_heads] or [num_hidden_layers x num_heads], optional) – The mask indicating if we should keep the heads or not (1.0 for keep, 0.0 for discard).
- num_hidden_layers (int) – The number of hidden layers in the model.
- is_attention_chunked (bool, optional, defaults to False) – Whether or not the attention scores are computed by chunks.
- Returns: torch.Tensor with shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] or list with [None] for each layer.
- invert_attention_mask(encoder_attention_mask: torch.Tensor) → torch.Tensor[source]¶
Invert an attention mask (e.g., switches 0. and 1.).
- Parameters: encoder_attention_mask (torch.Tensor) – An attention mask.
- Returns: The inverted attention mask.
- Return type: torch.Tensor
- num_parameters(only_trainable: bool = False, exclude_embeddings: bool = False) → int[source]¶
Get the number of (optionally, trainable or non-embeddings) parameters in the module.
- Parameters
- only_trainable (bool, optional, defaults to False) – Whether or not to return only the number of trainable parameters.
- exclude_embeddings (bool, optional, defaults to False) – Whether or not to return only the number of non-embeddings parameters.
- Returns: The number of parameters.
- Return type: int
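For example (the exact counts depend on the checkpoint; bert-base-uncased has roughly 110M parameters):

>>> from transformers import BertModel
>>> model = BertModel.from_pretrained('bert-base-uncased')
>>> total = model.num_parameters()
>>> # Exclude the (large) token embedding matrix from the count.
>>> non_embedding = model.num_parameters(exclude_embeddings=True)
>>> assert total >= non_embedding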
- reset_memory_hooks_state()[source]¶
Reset the mem_rss_diff attribute of each module (see add_memory_hooks()).
TFPreTrainedModel¶
- class transformers.TFPreTrainedModel(*args, **kwargs)[source]¶
Base class for all TF models.
TFPreTrainedModel takes care of storing the configuration of the models and handles methods for loading, downloading and saving models as well as a few methods common to all models to:
- resize the input embeddings,
- prune heads in the self-attention layers.
Class attributes (overridden by derived classes):
- config_class (PretrainedConfig) – A subclass of PretrainedConfig to use as configuration class for this model architecture.
- base_model_prefix (str) – A string indicating the attribute associated to the base model in derived classes of the same architecture adding modules on top of the base model.
- property dummy_inputs¶
Dummy inputs to build the network.
- Returns: The dummy inputs.
- Return type: Dict[str, tf.Tensor]
- classmethod from_pretrained(pretrained_model_name_or_path, *model_args, **kwargs)[source]¶
Instantiate a pretrained TF 2.0 model from a pre-trained model configuration.
The warning Weights from XXX not initialized from pretrained model means that the weights of XXX do not come pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning task.
The warning Weights from XXX not used in YYY means that the layer XXX is not used by YYY, therefore those weights are discarded.
- Parameters
- pretrained_model_name_or_path (str, optional) – Can be either:
  - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
  - A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
  - A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the PyTorch model in a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
  - None if you are both providing the configuration and state dictionary (resp. with keyword arguments config and state_dict).
- model_args (sequence of positional arguments, optional) – All remaining positional arguments will be passed to the underlying model's __init__ method.
- config (Union[PretrainedConfig, str], optional) – Can be either:
  - an instance of a class derived from PretrainedConfig,
  - a string valid as input to from_pretrained().
  Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
  - The model is a model provided by the library (loaded with the model id string of a pretrained model).
  - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
  - The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
- from_pt (bool, optional, defaults to False) – Load the model weights from a PyTorch state_dict save file (see docstring of pretrained_model_name_or_path argument).
- cache_dir (str, optional) – Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- force_download (bool, optional, defaults to False) – Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- resume_download (bool, optional, defaults to False) – Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
- proxies (Dict[str, str], optional) – A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- output_loading_info (bool, optional, defaults to False) – Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- local_files_only (bool, optional, defaults to False) – Whether or not to only look at local files (i.e., do not try to download the model).
- use_auth_token (str or bool, optional) – The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running transformers-cli login (stored in ~/.huggingface).
- revision (str, optional, defaults to "main") – The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- mirror (str, optional) – Mirror source to accelerate downloads in China. If you are from China and have an accessibility problem, you can set this option to resolve it. Note that we do not guarantee the timeliness or safety. Please refer to the mirror site for more information.
- kwargs (remaining dictionary of keyword arguments, optional) – Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
  - If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Note
Passing use_auth_token=True is required when you want to use a private model.
Examples:
>>> from transformers import BertConfig, TFBertModel
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFBertModel.from_pretrained('bert-base-uncased')
>>> # Model was saved using `save_pretrained('./test/saved_model/')` (for example purposes, not runnable).
>>> model = TFBertModel.from_pretrained('./test/saved_model/')
>>> # Update configuration during loading.
>>> model = TFBertModel.from_pretrained('bert-base-uncased', output_attentions=True)
>>> assert model.config.output_attentions == True
>>> # Loading from a PyTorch model file instead of a TensorFlow checkpoint (slower, for example purposes, not runnable).
>>> config = BertConfig.from_json_file('./pt_model/my_pt_model_config.json')
>>> model = TFBertModel.from_pretrained('./pt_model/my_pytorch_model.bin', from_pt=True, config=config)
- get_bias() → Union[None, Dict[str, tensorflow.python.ops.variables.Variable]][source]¶
Dict of bias attached to an LM head. The key represents the name of the bias attribute.
- Returns: The weights representing the bias, None if not an LM model.
- Return type: tf.Variable
- get_input_embeddings() → tensorflow.python.keras.engine.base_layer.Layer[source]¶
Returns the model's input embeddings layer.
- Returns: The embeddings layer mapping vocabulary to hidden states.
- Return type: tf.Variable
- get_lm_head() → tensorflow.python.keras.engine.base_layer.Layer[source]¶
The LM head layer. This method must be overwritten by all the models that have an LM head.
- Returns: The LM head layer if the model has one, None if not.
- Return type: tf.keras.layers.Layer
- get_output_embeddings() → Union[None, tensorflow.python.keras.engine.base_layer.Layer][source]¶
Returns the model's output embeddings.
- Returns: The new weights mapping hidden states to vocabulary.
- Return type: tf.Variable
- get_output_layer_with_bias() → Union[None, tensorflow.python.keras.engine.base_layer.Layer][source]¶
Get the layer that handles a bias attribute in case the model has an LM head with weights tied to the embeddings.
- Returns: The layer that handles the bias, None if not an LM model.
- Return type: tf.keras.layers.Layer
- get_prefix_bias_name() → Union[None, str][source]¶
Get the concatenated _prefix name of the bias from the model name to the parent layer.
- Returns: The _prefix name of the bias.
- Return type: str
- prune_heads(heads_to_prune)[source]¶
Prunes heads of the base model.
- Parameters: heads_to_prune (Dict[int, List[int]]) – Dictionary with keys being selected layer indices (int) and associated values being the list of heads to prune in said layer (list of int). For instance, {1: [0, 2], 2: [2, 3]} will prune heads 0 and 2 on layer 1 and heads 2 and 3 on layer 2.
- resize_token_embeddings(new_num_tokens=None) → tensorflow.python.ops.variables.Variable[source]¶
Resizes the input token embeddings matrix of the model if new_num_tokens != config.vocab_size.
Takes care of tying weights embeddings afterwards if the model class has a tie_weights() method.
- Parameters: new_num_tokens (int, optional) – The number of new tokens in the embedding matrix. Increasing the size will add newly initialized vectors at the end. Reducing the size will remove vectors from the end. If not provided or None, just returns a pointer to the input tokens tf.Variable module of the model without doing anything.
- Returns: Pointer to the input tokens Embeddings Module of the model.
- Return type: tf.Variable
- save_pretrained(save_directory, saved_model=False, version=1, push_to_hub=False, **kwargs)[source]¶
Save a model and its configuration file to a directory, so that it can be re-loaded using the from_pretrained() class method.
- Parameters
- save_directory (str) – Directory to which to save. Will be created if it doesn't exist.
- saved_model (bool, optional, defaults to False) – Whether the model also has to be saved in SavedModel format.
- version (int, optional, defaults to 1) – The version of the saved model. A saved model needs to be versioned in order to be properly loaded by TensorFlow Serving, as detailed in the official documentation: https://www.tensorflow.org/tfx/serving/serving_basic
- push_to_hub (bool, optional, defaults to False) – Whether or not to push your model to the Hugging Face model hub after saving it.
- kwargs – Additional keyword arguments passed along to the push_to_hub() method.
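A minimal sketch of exporting a versioned SavedModel for TensorFlow Serving (model and paths are illustrative):

>>> from transformers import TFBertModel
>>> model = TFBertModel.from_pretrained('bert-base-uncased')
>>> # Besides the H5 weights and config.json, this also writes a
>>> # versioned SavedModel under ./my_tf_model/saved_model/1.
>>> model.save_pretrained('./my_tf_model/', saved_model=True, version=1)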
- serving(inputs)[source]¶
Method used for serving the model.
- Parameters: inputs (Dict[str, tf.Tensor]) – The input of the saved model as a dictionary of tensors.
- serving_output(output)[source]¶
Prepare the output of the saved model. Each model must implement this function.
- Parameters: output (TFBaseModelOutput) – The output returned by the model.
- set_bias(value)[source]¶
Set all the bias in the LM head.
- Parameters: value (Dict[tf.Variable]) – All the new bias attached to an LM head.
TFModelUtilsMixin¶
- class transformers.modeling_tf_utils.TFModelUtilsMixin[source]¶
A few utilities for tf.keras.Model, to be used as a mixin.
- num_parameters(only_trainable: bool = False) → int[source]¶
Get the number of (optionally, trainable) parameters in the model.
- Parameters: only_trainable (bool, optional, defaults to False) – Whether or not to return only the number of trainable parameters.
- Returns: The number of parameters.
- Return type: int
FlaxPreTrainedModel¶
- class transformers.FlaxPreTrainedModel(config: transformers.configuration_utils.PretrainedConfig, module: flax.linen.module.Module, input_shape: Tuple = (1, 1), seed: int = 0, dtype: numpy.dtype = jax.numpy.float32)[source]¶
Base class for all models.
FlaxPreTrainedModel takes care of storing the configuration of the models and handles methods for loading, downloading and saving models.
Class attributes (overridden by derived classes):
- config_class (PretrainedConfig) – A subclass of PretrainedConfig to use as configuration class for this model architecture.
- base_model_prefix (str) – A string indicating the attribute associated to the base model in derived classes of the same architecture adding modules on top of the base model.
- classmethod from_pretrained(pretrained_model_name_or_path: Union[str, os.PathLike], dtype: numpy.dtype = jax.numpy.float32, *model_args, **kwargs)[source]¶
Instantiate a pretrained Flax model from a pre-trained model configuration.
The warning Weights from XXX not initialized from pretrained model means that the weights of XXX do not come pretrained with the rest of the model. It is up to you to train those weights with a downstream fine-tuning task.
The warning Weights from XXX not used in YYY means that the layer XXX is not used by YYY, therefore those weights are discarded.
- Parameters
- pretrained_model_name_or_path (str or os.PathLike) – Can be either:
  - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
  - A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
  - A path or url to a PyTorch index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_pt should be set to True.
- model_args (sequence of positional arguments, optional) – All remaining positional arguments will be passed to the underlying model's __init__ method.
- config (Union[PretrainedConfig, str, os.PathLike], optional) – Can be either:
  - an instance of a class derived from PretrainedConfig,
  - a string or path valid as input to from_pretrained().
  Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
  - The model is a model provided by the library (loaded with the model id string of a pretrained model).
  - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
  - The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
- cache_dir (Union[str, os.PathLike], optional) – Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- from_pt (bool, optional, defaults to False) – Load the model weights from a PyTorch checkpoint save file (see docstring of pretrained_model_name_or_path argument).
- force_download (bool, optional, defaults to False) – Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- resume_download (bool, optional, defaults to False) – Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
- proxies (Dict[str, str], optional) – A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- local_files_only (bool, optional, defaults to False) – Whether or not to only look at local files (i.e., do not try to download the model).
- revision (str, optional, defaults to "main") – The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- kwargs (remaining dictionary of keyword arguments, optional) – Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
  - If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Examples:
>>> from transformers import BertConfig, FlaxBertModel
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxBertModel.from_pretrained('bert-base-cased')
>>> # Model was saved using `save_pretrained('./test/saved_model/')` (for example purposes, not runnable).
>>> model = FlaxBertModel.from_pretrained('./test/saved_model/')
>>> # Loading from a PyTorch checkpoint file instead of a Flax model (slower, for example purposes, not runnable).
>>> config = BertConfig.from_json_file('./pt_model/config.json')
>>> model = FlaxBertModel.from_pretrained('./pt_model/pytorch_model.bin', from_pt=True, config=config)
- save_pretrained(save_directory: Union[str, os.PathLike], params=None, push_to_hub=False, **kwargs)[source]¶
Save a model and its configuration file to a directory, so that it can be re-loaded using the from_pretrained() class method.
- Parameters
- save_directory (str or os.PathLike) – Directory to which to save. Will be created if it doesn't exist.
- push_to_hub (bool, optional, defaults to False) – Whether or not to push your model to the Hugging Face model hub after saving it.
- kwargs – Additional keyword arguments passed along to the push_to_hub() method.
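A minimal save/reload round trip for a Flax model (the directory path is illustrative):

>>> from transformers import FlaxBertModel
>>> model = FlaxBertModel.from_pretrained('bert-base-cased')
>>> # Writes flax_model.msgpack and config.json into the directory.
>>> model.save_pretrained('./my_flax_model/')
>>> reloaded = FlaxBertModel.from_pretrained('./my_flax_model/')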
Generation¶
- class transformers.generation_utils.GenerationMixin[source]¶
A class containing all of the functions supporting generation, to be used as a mixin in PreTrainedModel.
- adjust_logits_during_generation(logits: torch.FloatTensor, **kwargs) → torch.FloatTensor[source]¶
Implement in subclasses of PreTrainedModel for custom behavior to adjust the logits in the generate method.
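A minimal sketch of overriding this hook in a model subclass (the subclass, the particular adjustment, and which decoding paths invoke the hook are illustrative assumptions, not part of the documented API):

>>> import torch
>>> from transformers import GPT2LMHeadModel

>>> class MyModel(GPT2LMHeadModel):
...     def adjust_logits_during_generation(self, logits: torch.FloatTensor, **kwargs) -> torch.FloatTensor:
...         # Hypothetical adjustment: forbid token id 0 whenever the hook is invoked.
...         logits[:, 0] = -float("inf")
...         return logits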
- beam_sample(input_ids: torch.LongTensor, beam_scorer: transformers.generation_beam_search.BeamScorer, logits_processor: Optional[transformers.generation_logits_process.LogitsProcessorList] = None, stopping_criteria: Optional[transformers.generation_stopping_criteria.StoppingCriteriaList] = None, logits_warper: Optional[transformers.generation_logits_process.LogitsProcessorList] = None, max_length: Optional[int] = None, pad_token_id: Optional[int] = None, eos_token_id: Optional[int] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, output_scores: Optional[bool] = None, return_dict_in_generate: Optional[bool] = None, synced_gpus: Optional[bool] = None, **model_kwargs) → Union[transformers.generation_utils.BeamSampleEncoderDecoderOutput, transformers.generation_utils.BeamSampleDecoderOnlyOutput, torch.LongTensor][source]¶
Generates sequences for models with a language modeling head using beam search with multinomial sampling.
- Parameters
- input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) – The sequence used as a prompt for the generation. If None the method initializes it as an empty torch.LongTensor of shape (1,).
- beam_scorer (BeamScorer) – A derived instance of BeamScorer that defines how beam hypotheses are constructed, stored and sorted during generation. For more information, the documentation of BeamScorer should be read.
- logits_processor (LogitsProcessorList, optional) – An instance of LogitsProcessorList. List of instances of class derived from LogitsProcessor used to modify the prediction scores of the language modeling head applied at each generation step.
- stopping_criteria (StoppingCriteriaList, optional) – An instance of StoppingCriteriaList. List of instances of class derived from StoppingCriteria used to tell if the generation loop should stop.
- logits_warper (LogitsProcessorList, optional) – An instance of LogitsProcessorList. List of instances of class derived from LogitsWarper used to warp the prediction score distribution of the language modeling head applied before multinomial sampling at each generation step.
- max_length (int, optional, defaults to 20) – DEPRECATED. Use logits_processor or stopping_criteria directly to cap the number of generated tokens. The maximum length of the sequence to be generated.
- pad_token_id (int, optional) – The id of the padding token.
- eos_token_id (int, optional) – The id of the end-of-sequence token.
- output_attentions (bool, optional, defaults to False) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more details.
- output_hidden_states (bool, optional, defaults to False) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more details.
- output_scores (bool, optional, defaults to False) – Whether or not to return the prediction scores. See scores under returned tensors for more details.
- return_dict_in_generate (bool, optional, defaults to False) – Whether or not to return a ModelOutput instead of a plain tuple.
- synced_gpus (bool, optional, defaults to False) – Whether to continue running the while loop until max_length (needed for ZeRO stage 3).
- model_kwargs – Additional model specific kwargs will be forwarded to the forward function of the model. If the model is an encoder-decoder model the kwargs should include encoder_outputs.
- Returns
BeamSampleDecoderOnlyOutput, BeamSampleEncoderDecoderOutput or torch.LongTensor: A torch.LongTensor containing the generated tokens (default behaviour), a BeamSampleDecoderOnlyOutput if model.config.is_encoder_decoder=False and return_dict_in_generate=True, or a BeamSampleEncoderDecoderOutput if model.config.is_encoder_decoder=True.
Examples:
>>> from transformers import (
...     AutoTokenizer,
...     AutoModelForSeq2SeqLM,
...     LogitsProcessorList,
...     MinLengthLogitsProcessor,
...     TopKLogitsWarper,
...     TemperatureLogitsWarper,
...     BeamSearchScorer,
... )
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("t5-base")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

>>> encoder_input_str = "translate English to German: How old are you?"
>>> encoder_input_ids = tokenizer(encoder_input_str, return_tensors="pt").input_ids

>>> # lets run beam search using 3 beams
>>> num_beams = 3
>>> # define decoder start token ids
>>> input_ids = torch.ones((num_beams, 1), device=model.device, dtype=torch.long)
>>> input_ids = input_ids * model.config.decoder_start_token_id

>>> # add encoder_outputs to model keyword arguments
>>> model_kwargs = {
...     "encoder_outputs": model.get_encoder()(encoder_input_ids.repeat_interleave(num_beams, dim=0), return_dict=True)
... }

>>> # instantiate beam scorer
>>> beam_scorer = BeamSearchScorer(
...     batch_size=1,
...     max_length=model.config.max_length,
...     num_beams=num_beams,
...     device=model.device,
... )

>>> # instantiate logits processors
>>> logits_processor = LogitsProcessorList([
...     MinLengthLogitsProcessor(5, eos_token_id=model.config.eos_token_id)
... ])
>>> # instantiate logits warpers
>>> logits_warper = LogitsProcessorList([
...     TopKLogitsWarper(50),
...     TemperatureLogitsWarper(0.7),
... ])

>>> outputs = model.beam_sample(
...     input_ids, beam_scorer, logits_processor=logits_processor, logits_warper=logits_warper, **model_kwargs
... )

>>> print("Generated:", tokenizer.batch_decode(outputs, skip_special_tokens=True))
- beam_search(input_ids: torch.LongTensor, beam_scorer: transformers.generation_beam_search.BeamScorer, logits_processor: Optional[transformers.generation_logits_process.LogitsProcessorList] = None, stopping_criteria: Optional[transformers.generation_stopping_criteria.StoppingCriteriaList] = None, max_length: Optional[int] = None, pad_token_id: Optional[int] = None, eos_token_id: Optional[int] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, output_scores: Optional[bool] = None, return_dict_in_generate: Optional[bool] = None, synced_gpus: Optional[bool] = None, **model_kwargs) → Union[transformers.generation_utils.BeamSearchEncoderDecoderOutput, transformers.generation_utils.BeamSearchDecoderOnlyOutput, torch.LongTensor][source]¶
Generates sequences for models with a language modeling head using beam search decoding.
- Parameters
- input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) – The sequence used as a prompt for the generation. If None the method initializes it as an empty torch.LongTensor of shape (1,).
- beam_scorer (BeamScorer) – A derived instance of BeamScorer that defines how beam hypotheses are constructed, stored and sorted during generation. For more information, the documentation of BeamScorer should be read.
- logits_processor (LogitsProcessorList, optional) – An instance of LogitsProcessorList. List of instances of class derived from LogitsProcessor used to modify the prediction scores of the language modeling head applied at each generation step.
- stopping_criteria (StoppingCriteriaList, optional) – An instance of StoppingCriteriaList. List of instances of class derived from StoppingCriteria used to tell if the generation loop should stop.
- max_length (int, optional, defaults to 20) – DEPRECATED. Use logits_processor or stopping_criteria directly to cap the number of generated tokens. The maximum length of the sequence to be generated.
- pad_token_id (int, optional) – The id of the padding token.
- eos_token_id (int, optional) – The id of the end-of-sequence token.
- output_attentions (bool, optional, defaults to False) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more details.
- output_hidden_states (bool, optional, defaults to False) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more details.
- output_scores (bool, optional, defaults to False) – Whether or not to return the prediction scores. See scores under returned tensors for more details.
- return_dict_in_generate (bool, optional, defaults to False) – Whether or not to return a ModelOutput instead of a plain tuple.
- synced_gpus (bool, optional, defaults to False) – Whether to continue running the while loop until max_length (needed for ZeRO stage 3).
- model_kwargs – Additional model specific kwargs will be forwarded to the forward function of the model. If the model is an encoder-decoder model the kwargs should include encoder_outputs.
- Returns
BeamSearchDecoderOnlyOutput, BeamSearchEncoderDecoderOutput or torch.LongTensor: A torch.LongTensor containing the generated tokens (default behaviour), a BeamSearchDecoderOnlyOutput if model.config.is_encoder_decoder=False and return_dict_in_generate=True, or a BeamSearchEncoderDecoderOutput if model.config.is_encoder_decoder=True.
Examples:
>>> from transformers import (
...     AutoTokenizer,
...     AutoModelForSeq2SeqLM,
...     LogitsProcessorList,
...     MinLengthLogitsProcessor,
...     BeamSearchScorer,
... )
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("t5-base")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

>>> encoder_input_str = "translate English to German: How old are you?"
>>> encoder_input_ids = tokenizer(encoder_input_str, return_tensors="pt").input_ids

>>> # lets run beam search using 3 beams
>>> num_beams = 3
>>> # define decoder start token ids
>>> input_ids = torch.ones((num_beams, 1), device=model.device, dtype=torch.long)
>>> input_ids = input_ids * model.config.decoder_start_token_id

>>> # add encoder_outputs to model keyword arguments
>>> model_kwargs = {
...     "encoder_outputs": model.get_encoder()(encoder_input_ids.repeat_interleave(num_beams, dim=0), return_dict=True)
... }

>>> # instantiate beam scorer
>>> beam_scorer = BeamSearchScorer(
...     batch_size=1,
...     num_beams=num_beams,
...     device=model.device,
... )

>>> # instantiate logits processors
>>> logits_processor = LogitsProcessorList([
...     MinLengthLogitsProcessor(5, eos_token_id=model.config.eos_token_id),
... ])

>>> outputs = model.beam_search(input_ids, beam_scorer, logits_processor=logits_processor, **model_kwargs)

>>> print("Generated:", tokenizer.batch_decode(outputs, skip_special_tokens=True))
- generate(input_ids: Optional[torch.LongTensor] = None, max_length: Optional[int] = None, min_length: Optional[int] = None, do_sample: Optional[bool] = None, early_stopping: Optional[bool] = None, num_beams: Optional[int] = None, temperature: Optional[float] = None, top_k: Optional[int] = None, top_p: Optional[float] = None, repetition_penalty: Optional[float] = None, bad_words_ids: Optional[Iterable[int]] = None, bos_token_id: Optional[int] = None, pad_token_id: Optional[int] = None, eos_token_id: Optional[int] = None, length_penalty: Optional[float] = None, no_repeat_ngram_size: Optional[int] = None, encoder_no_repeat_ngram_size: Optional[int] = None, num_return_sequences: Optional[int] = None, max_time: Optional[float] = None, max_new_tokens: Optional[int] = None, decoder_start_token_id: Optional[int] = None, use_cache: Optional[bool] = None, num_beam_groups: Optional[int] = None, diversity_penalty: Optional[float] = None, prefix_allowed_tokens_fn: Optional[Callable[[int, torch.Tensor], List[int]]] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, output_scores: Optional[bool] = None, return_dict_in_generate: Optional[bool] = None, forced_bos_token_id: Optional[int] = None, forced_eos_token_id: Optional[int] = None, remove_invalid_values: Optional[bool] = None, synced_gpus: Optional[bool] = None, **model_kwargs) → Union[transformers.generation_utils.GreedySearchEncoderDecoderOutput, transformers.generation_utils.GreedySearchDecoderOnlyOutput, transformers.generation_utils.SampleEncoderDecoderOutput, transformers.generation_utils.SampleDecoderOnlyOutput, transformers.generation_utils.BeamSearchEncoderDecoderOutput, transformers.generation_utils.BeamSearchDecoderOnlyOutput, transformers.generation_utils.BeamSampleEncoderDecoderOutput, transformers.generation_utils.BeamSampleDecoderOnlyOutput, torch.LongTensor][source]¶
Generates sequences for models with a language modeling head. The method currently supports greedy decoding, multinomial sampling, beam-search decoding, and beam-search multinomial sampling.
Apart from input_ids and attention_mask, all the arguments below will default to the value of the attribute of the same name inside the PretrainedConfig of the model. The default values indicated are the default values of those configuration attributes.
Most of these parameters are explained in more detail in this blog post.
- Parameters
- input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) – The sequence used as a prompt for the generation. If None the method initializes it as an empty torch.LongTensor of shape (1,).
- max_length (int, optional, defaults to model.config.max_length) – The maximum length of the sequence to be generated.
- max_new_tokens (int, optional, defaults to None) – The maximum number of tokens to generate, ignoring the number of tokens in the prompt. Use either max_new_tokens or max_length but not both, they serve the same purpose.
- min_length (int, optional, defaults to 10) – The minimum length of the sequence to be generated.
- do_sample (bool, optional, defaults to False) – Whether or not to use sampling; use greedy decoding otherwise.
- early_stopping (bool, optional, defaults to False) – Whether to stop the beam search when at least num_beams sentences are finished per batch or not.
- num_beams (int, optional, defaults to 1) – Number of beams for beam search. 1 means no beam search.
- temperature (float, optional, defaults to 1.0) – The value used to modulate the next token probabilities.
- top_k (int, optional, defaults to 50) – The number of highest probability vocabulary tokens to keep for top-k-filtering.
- top_p (float, optional, defaults to 1.0) – If set to float < 1, only the most probable tokens with probabilities that add up to top_p or higher are kept for generation.
- repetition_penalty (float, optional, defaults to 1.0) – The parameter for repetition penalty. 1.0 means no penalty. See this paper for more details.
- pad_token_id (int, optional) – The id of the padding token.
- bos_token_id (int, optional) – The id of the beginning-of-sequence token.
- eos_token_id (int, optional) – The id of the end-of-sequence token.
- length_penalty (float, optional, defaults to 1.0) – Exponential penalty to the length. 1.0 means no penalty. Set to values < 1.0 in order to encourage the model to generate shorter sequences, to a value > 1.0 in order to encourage the model to produce longer sequences.
- no_repeat_ngram_size (int, optional, defaults to 0) – If set to int > 0, all ngrams of that size can only occur once.
- encoder_no_repeat_ngram_size (int, optional, defaults to 0) – If set to int > 0, all ngrams of that size that occur in the encoder_input_ids cannot occur in the decoder_input_ids.
- bad_words_ids (List[List[int]], optional) – List of token ids that are not allowed to be generated. In order to get the tokens of the words that should not appear in the generated text, use tokenizer(bad_word, add_prefix_space=True).input_ids.
- num_return_sequences (int, optional, defaults to 1) – The number of independently computed returned sequences for each element in the batch.
- max_time (float, optional, defaults to None) – The maximum amount of time you allow the computation to run for, in seconds. Generation will still finish the current pass after the allocated time has been passed.
- attention_mask (torch.LongTensor of shape (batch_size, sequence_length), optional) – Mask to avoid performing attention on padding token indices. Mask values are in [0, 1], 1 for tokens that are not masked, and 0 for masked tokens. If not provided, will default to a tensor the same shape as input_ids that masks the pad token. What are attention masks?
- decoder_start_token_id (int, optional) – If an encoder-decoder model starts decoding with a different token than bos, the id of that token.
- use_cache (bool, optional, defaults to True) – Whether or not the model should use the past last key/values attentions (if applicable to the model) to speed up decoding.
- num_beam_groups (int, optional, defaults to 1) – Number of groups to divide num_beams into in order to ensure diversity among different groups of beams. See this paper for more details.
- diversity_penalty (float, optional, defaults to 0.0) – This value is subtracted from a beam's score if it generates a token same as any beam from another group at a particular time. Note that diversity_penalty is only effective if group beam search is enabled.
- prefix_allowed_tokens_fn (Callable[[int, torch.Tensor], List[int]], optional) – If provided, this function constrains the beam search to allowed tokens only at each step. If not provided, no constraint is applied. This function takes 2 arguments: the batch ID batch_id and input_ids. It has to return a list with the allowed tokens for the next generation step conditioned on the batch ID batch_id and the previously generated tokens inputs_ids. This argument is useful for constrained generation conditioned on the prefix, as described in Autoregressive Entity Retrieval.
- output_attentions (bool, optional, defaults to False) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more details.
- output_hidden_states (bool, optional, defaults to False) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more details.
- output_scores (bool, optional, defaults to False) – Whether or not to return the prediction scores. See scores under returned tensors for more details.
- return_dict_in_generate (bool, optional, defaults to False) – Whether or not to return a ModelOutput instead of a plain tuple.
- forced_bos_token_id (int, optional) – The id of the token to force as the first generated token after the decoder_start_token_id. Useful for multilingual models like mBART where the first generated token needs to be the target language token.
- forced_eos_token_id (int, optional) – The id of the token to force as the last generated token when max_length is reached.
- remove_invalid_values (bool, optional) – Whether to remove possible nan and inf outputs of the model to prevent the generation method from crashing. Note that using remove_invalid_values can slow down generation.
- synced_gpus (bool, optional, defaults to False) – Whether to continue running the while loop until max_length (needed for ZeRO stage 3).
- model_kwargs – Additional model specific kwargs will be forwarded to the forward function of the model. If the model is an encoder-decoder model, encoder specific kwargs should not be prefixed and decoder specific kwargs should be prefixed with decoder_.
- Returns
A ModelOutput (if return_dict_in_generate=True or when config.return_dict_in_generate=True) or a torch.FloatTensor.
If the model is not an encoder-decoder model (model.config.is_encoder_decoder=False), the possible ModelOutput types are:
- GreedySearchDecoderOnlyOutput,
- SampleDecoderOnlyOutput,
- BeamSearchDecoderOnlyOutput,
- BeamSampleDecoderOnlyOutput.
If the model is an encoder-decoder model (model.config.is_encoder_decoder=True), the possible ModelOutput types are:
- GreedySearchEncoderDecoderOutput,
- SampleEncoderDecoderOutput,
- BeamSearchEncoderDecoderOutput,
- BeamSampleEncoderDecoderOutput.
- Return type: ModelOutput or torch.LongTensor
Examples:
>>> from transformers import AutoTokenizer, AutoModelForCausalLM, AutoModelForSeq2SeqLM
>>> tokenizer = AutoTokenizer.from_pretrained("distilgpt2") >>> model = AutoModelForCausalLM.from_pretrained("distilgpt2") >>> # do greedy decoding without providing a prompt >>> outputs = model.generate(max_length=40) >>> print("Generated:", tokenizer.decode(outputs[0], skip_special_tokens=True))
>>> tokenizer = AutoTokenizer.from_pretrained("t5-base") >>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base") >>> document = ( ... "at least two people were killed in a suspected bomb attack on a passenger bus " ... "in the strife-torn southern philippines on monday , the military said." ... ) >>> # encode input context >>> input_ids = tokenizer(document, return_tensors="pt").input_ids >>> # generate 3 independent sequences using beam search decoding (5 beams) >>> # with T5 encoder-decoder model conditioned on short news article. >>> outputs = model.generate(input_ids=input_ids, num_beams=5, num_return_sequences=3) >>> print("Generated:", tokenizer.batch_decode(outputs, skip_special_tokens=True))
>>> tokenizer = AutoTokenizer.from_pretrained("distilgpt2") >>> model = AutoModelForCausalLM.from_pretrained("distilgpt2") >>> input_context = "The dog" >>> # encode input context >>> input_ids = tokenizer(input_context, return_tensors="pt").input_ids >>> # generate 3 candidates using sampling >>> outputs = model.generate(input_ids=input_ids, max_length=20, num_return_sequences=3, do_sample=True) >>> print("Generated:", tokenizer.batch_decode(outputs, skip_special_tokens=True))
>>> tokenizer = AutoTokenizer.from_pretrained("ctrl") >>> model = AutoModelForCausalLM.from_pretrained("ctrl") >>> # "Legal" is one of the control codes for ctrl >>> input_context = "Legal My neighbor is" >>> # encode input context >>> input_ids = tokenizer(input_context, return_tensors="pt").input_ids >>> outputs = model.generate(input_ids=input_ids, max_length=20, repetition_penalty=1.2) >>> print("Generated:", tokenizer.decode(outputs[0], skip_special_tokens=True))
>>> tokenizer = AutoTokenizer.from_pretrained("gpt2") >>> model = AutoModelForCausalLM.from_pretrained("gpt2") >>> input_context = "My cute dog" >>> # get tokens of words that should not be generated >>> bad_words_ids = [tokenizer(bad_word, add_prefix_space=True).input_ids for bad_word in ["idiot", "stupid", "shut up"]] >>> # encode input context >>> input_ids = tokenizer(input_context, return_tensors="pt").input_ids >>> # generate sequences without allowing bad_words to be generated >>> outputs = model.generate(input_ids=input_ids, max_length=20, do_sample=True, bad_words_ids=bad_words_ids) >>> print("Generated:", tokenizer.decode(outputs[0], skip_special_tokens=True))
-
greedy_search
(input_ids: torch.LongTensor, logits_processor: Optional[transformers.generation_logits_process.LogitsProcessorList] = None, stopping_criteria: Optional[transformers.generation_stopping_criteria.StoppingCriteriaList] = None, max_length: Optional[int] = None, pad_token_id: Optional[int] = None, eos_token_id: Optional[int] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, output_scores: Optional[bool] = None, return_dict_in_generate: Optional[bool] = None, synced_gpus: Optional[bool] = None, **model_kwargs) → Union[transformers.generation_utils.GreedySearchEncoderDecoderOutput, transformers.generation_utils.GreedySearchDecoderOnlyOutput, torch.LongTensor][source]¶ Generates sequences for models with a language modeling head using greedy decoding.
- Parameters

input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) – The sequence used as a prompt for the generation. If None, the method initializes it as an empty torch.LongTensor of shape (1,).

logits_processor (LogitsProcessorList, optional) – An instance of LogitsProcessorList. A list of instances of classes derived from LogitsProcessor used to modify the prediction scores of the language modeling head at each generation step.

stopping_criteria (StoppingCriteriaList, optional) – An instance of StoppingCriteriaList. A list of instances of classes derived from StoppingCriteria used to tell whether the generation loop should stop. See the sketch after the example below for typical usage.

max_length (int, optional, defaults to 20) – The maximum length of the sequence to be generated. DEPRECATED: use logits_processor or stopping_criteria directly to cap the number of generated tokens instead.

pad_token_id (int, optional) – The id of the padding token.

eos_token_id (int, optional) – The id of the end-of-sequence token.

output_attentions (bool, optional, defaults to False) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more details.

output_hidden_states (bool, optional, defaults to False) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more details.

output_scores (bool, optional, defaults to False) – Whether or not to return the prediction scores. See scores under returned tensors for more details.

return_dict_in_generate (bool, optional, defaults to False) – Whether or not to return a ModelOutput instead of a plain tuple.

synced_gpus (bool, optional, defaults to False) – Whether to continue running the while loop until max_length (needed for ZeRO stage 3).

model_kwargs – Additional model-specific keyword arguments that will be forwarded to the forward function of the model. If the model is an encoder-decoder model, the kwargs should include encoder_outputs.
- Returns

GreedySearchDecoderOnlyOutput, GreedySearchEncoderDecoderOutput or torch.LongTensor: A torch.LongTensor containing the generated tokens (default behaviour), a GreedySearchDecoderOnlyOutput if model.config.is_encoder_decoder=False and return_dict_in_generate=True, or a GreedySearchEncoderDecoderOutput if model.config.is_encoder_decoder=True.
Examples:
>>> from transformers import (
...     AutoTokenizer,
...     AutoModelForCausalLM,
...     LogitsProcessorList,
...     MinLengthLogitsProcessor,
... )

>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")

>>> # set pad_token_id to eos_token_id because GPT2 does not have a PAD token
>>> model.config.pad_token_id = model.config.eos_token_id

>>> input_prompt = "Today is a beautiful day, and"
>>> input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids

>>> # instantiate logits processors
>>> logits_processor = LogitsProcessorList([
...     MinLengthLogitsProcessor(15, eos_token_id=model.config.eos_token_id),
... ])

>>> outputs = model.greedy_search(input_ids, logits_processor=logits_processor)
>>> print("Generated:", tokenizer.batch_decode(outputs, skip_special_tokens=True))
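Because max_length is deprecated for this method, the generation length is capped through stopping_criteria instead. A minimal sketch, continuing from the model and input_ids above and using the library's MaxLengthCriteria:

>>> from transformers import StoppingCriteriaList, MaxLengthCriteria

>>> # stop once the full sequence (prompt included) reaches 20 tokens
>>> stopping_criteria = StoppingCriteriaList([MaxLengthCriteria(max_length=20)])
>>> outputs = model.greedy_search(input_ids, stopping_criteria=stopping_criteria)
>>> print("Generated:", tokenizer.batch_decode(outputs, skip_special_tokens=True))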
-
group_beam_search
(input_ids: torch.LongTensor, beam_scorer: transformers.generation_beam_search.BeamScorer, logits_processor: Optional[transformers.generation_logits_process.LogitsProcessorList] = None, stopping_criteria: Optional[transformers.generation_stopping_criteria.StoppingCriteriaList] = None, max_length: Optional[int] = None, pad_token_id: Optional[int] = None, eos_token_id: Optional[int] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, output_scores: Optional[bool] = None, return_dict_in_generate: Optional[bool] = None, synced_gpus: Optional[bool] = None, **model_kwargs)[source]¶ Generates sequences for models with a language modeling head using beam search decoding.
- Parameters

input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) – The sequence used as a prompt for the generation. If None, the method initializes it as an empty torch.LongTensor of shape (1,).

beam_scorer (BeamScorer) – A derived instance of BeamScorer that defines how beam hypotheses are constructed, stored and sorted during generation. For more information, read the documentation of BeamScorer.

logits_processor (LogitsProcessorList, optional) – An instance of LogitsProcessorList. A list of instances of classes derived from LogitsProcessor used to modify the prediction scores of the language modeling head at each generation step.

stopping_criteria (StoppingCriteriaList, optional) – An instance of StoppingCriteriaList. A list of instances of classes derived from StoppingCriteria used to tell whether the generation loop should stop.

max_length (int, optional, defaults to 20) – The maximum length of the sequence to be generated. DEPRECATED: use logits_processor or stopping_criteria directly to cap the number of generated tokens instead.

pad_token_id (int, optional) – The id of the padding token.

eos_token_id (int, optional) – The id of the end-of-sequence token.

output_attentions (bool, optional, defaults to False) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more details.

output_hidden_states (bool, optional, defaults to False) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more details.

output_scores (bool, optional, defaults to False) – Whether or not to return the prediction scores. See scores under returned tensors for more details.

return_dict_in_generate (bool, optional, defaults to False) – Whether or not to return a ModelOutput instead of a plain tuple.

synced_gpus (bool, optional, defaults to False) – Whether to continue running the while loop until max_length (needed for ZeRO stage 3).

model_kwargs – Additional model-specific kwargs that will be forwarded to the forward function of the model. If the model is an encoder-decoder model, the kwargs should include encoder_outputs.
- Returns

BeamSearchDecoderOnlyOutput, BeamSearchEncoderDecoderOutput or torch.LongTensor: A torch.LongTensor containing the generated tokens (default behaviour), a BeamSearchDecoderOnlyOutput if model.config.is_encoder_decoder=False and return_dict_in_generate=True, or a BeamSearchEncoderDecoderOutput if model.config.is_encoder_decoder=True.
Examples:
>>> from transformers import (
...     AutoTokenizer,
...     AutoModelForSeq2SeqLM,
...     LogitsProcessorList,
...     MinLengthLogitsProcessor,
...     HammingDiversityLogitsProcessor,
...     BeamSearchScorer,
... )
>>> import torch

>>> tokenizer = AutoTokenizer.from_pretrained("t5-base")
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

>>> encoder_input_str = "translate English to German: How old are you?"
>>> encoder_input_ids = tokenizer(encoder_input_str, return_tensors="pt").input_ids

>>> # let's run diverse beam search using 6 beams
>>> num_beams = 6
>>> # define decoder start token ids
>>> input_ids = torch.ones((num_beams, 1), device=model.device, dtype=torch.long)
>>> input_ids = input_ids * model.config.decoder_start_token_id

>>> # add encoder_outputs to model keyword arguments
>>> model_kwargs = {
...     "encoder_outputs": model.get_encoder()(encoder_input_ids.repeat_interleave(num_beams, dim=0), return_dict=True)
... }

>>> # instantiate beam scorer
>>> beam_scorer = BeamSearchScorer(
...     batch_size=1,
...     max_length=model.config.max_length,
...     num_beams=num_beams,
...     device=model.device,
...     num_beam_groups=3
... )

>>> # instantiate logits processors
>>> logits_processor = LogitsProcessorList([
...     HammingDiversityLogitsProcessor(5.5, num_beams=6, num_beam_groups=3),
...     MinLengthLogitsProcessor(5, eos_token_id=model.config.eos_token_id),
... ])

>>> outputs = model.group_beam_search(input_ids, beam_scorer, logits_processor=logits_processor, **model_kwargs)
>>> print("Generated:", tokenizer.batch_decode(outputs, skip_special_tokens=True))
-
prepare_inputs_for_generation
(input_ids: torch.LongTensor, **kwargs) → Dict[str, Any][source]¶ Implement in subclasses of
PreTrainedModel
for custom behavior to prepare inputs in the generate method.
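For a decoder-only model with caching, a typical override keeps only the last token once past key/values are available. A minimal sketch, assuming a GPT-2-style model whose forward accepts past_key_values and use_cache (the exact keyword names vary per model):

# Illustrative override in a PreTrainedModel subclass, not library code.
def prepare_inputs_for_generation(self, input_ids, past=None, **kwargs):
    if past is not None:
        # Once cached key/values exist, only the most recent token has to be
        # fed through the model again.
        input_ids = input_ids[:, -1:]
    return {
        "input_ids": input_ids,
        "past_key_values": past,
        "use_cache": kwargs.get("use_cache"),
    }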
-
sample
(input_ids: torch.LongTensor, logits_processor: Optional[transformers.generation_logits_process.LogitsProcessorList] = None, stopping_criteria: Optional[transformers.generation_stopping_criteria.StoppingCriteriaList] = None, logits_warper: Optional[transformers.generation_logits_process.LogitsProcessorList] = None, max_length: Optional[int] = None, pad_token_id: Optional[int] = None, eos_token_id: Optional[int] = None, output_attentions: Optional[bool] = None, output_hidden_states: Optional[bool] = None, output_scores: Optional[bool] = None, return_dict_in_generate: Optional[bool] = None, synced_gpus: Optional[bool] = None, **model_kwargs) → Union[transformers.generation_utils.SampleEncoderDecoderOutput, transformers.generation_utils.SampleDecoderOnlyOutput, torch.LongTensor][source]¶ Generates sequences for models with a language modeling head using multinomial sampling.
- Parameters

input_ids (torch.LongTensor of shape (batch_size, sequence_length), optional) – The sequence used as a prompt for the generation. If None, the method initializes it as an empty torch.LongTensor of shape (1,).

logits_processor (LogitsProcessorList, optional) – An instance of LogitsProcessorList. A list of instances of classes derived from LogitsProcessor used to modify the prediction scores of the language modeling head at each generation step.

stopping_criteria (StoppingCriteriaList, optional) – An instance of StoppingCriteriaList. A list of instances of classes derived from StoppingCriteria used to tell whether the generation loop should stop.

logits_warper (LogitsProcessorList, optional) – An instance of LogitsProcessorList. A list of instances of classes derived from LogitsWarper used to warp the prediction score distribution of the language modeling head before multinomial sampling at each generation step.

max_length (int, optional, defaults to 20) – The maximum length of the sequence to be generated. DEPRECATED: use logits_processor or stopping_criteria directly to cap the number of generated tokens instead.

pad_token_id (int, optional) – The id of the padding token.

eos_token_id (int, optional) – The id of the end-of-sequence token.

output_attentions (bool, optional, defaults to False) – Whether or not to return the attentions tensors of all attention layers. See attentions under returned tensors for more details.

output_hidden_states (bool, optional, defaults to False) – Whether or not to return the hidden states of all layers. See hidden_states under returned tensors for more details.

output_scores (bool, optional, defaults to False) – Whether or not to return the prediction scores. See scores under returned tensors for more details.

return_dict_in_generate (bool, optional, defaults to False) – Whether or not to return a ModelOutput instead of a plain tuple.

synced_gpus (bool, optional, defaults to False) – Whether to continue running the while loop until max_length (needed for ZeRO stage 3).

model_kwargs – Additional model-specific kwargs that will be forwarded to the forward function of the model. If the model is an encoder-decoder model, the kwargs should include encoder_outputs.
- Returns

SampleDecoderOnlyOutput, SampleEncoderDecoderOutput or torch.LongTensor: A torch.LongTensor containing the generated tokens (default behaviour), a SampleDecoderOnlyOutput if model.config.is_encoder_decoder=False and return_dict_in_generate=True, or a SampleEncoderDecoderOutput if model.config.is_encoder_decoder=True.
Examples:
>>> from transformers import (
...     AutoTokenizer,
...     AutoModelForCausalLM,
...     LogitsProcessorList,
...     MinLengthLogitsProcessor,
...     TopKLogitsWarper,
...     TemperatureLogitsWarper,
... )

>>> tokenizer = AutoTokenizer.from_pretrained("gpt2")
>>> model = AutoModelForCausalLM.from_pretrained("gpt2")

>>> # set pad_token_id to eos_token_id because GPT2 does not have a PAD token
>>> model.config.pad_token_id = model.config.eos_token_id

>>> input_prompt = "Today is a beautiful day, and"
>>> input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids

>>> # instantiate logits processors
>>> logits_processor = LogitsProcessorList([
...     MinLengthLogitsProcessor(15, eos_token_id=model.config.eos_token_id),
... ])
>>> # instantiate logits warpers
>>> logits_warper = LogitsProcessorList([
...     TopKLogitsWarper(50),
...     TemperatureLogitsWarper(0.7),
... ])

>>> outputs = model.sample(input_ids, logits_processor=logits_processor, logits_warper=logits_warper)
>>> print("Generated:", tokenizer.batch_decode(outputs, skip_special_tokens=True))
-
-
class
transformers.generation_tf_utils.
TFGenerationMixin
[source]¶ A class containing all of the functions supporting generation, to be used as a mixin in
TFPreTrainedModel
.-
adjust_logits_during_generation
(logits, cur_len, max_length, forced_bos_token_id, forced_eos_token_id, **kwargs)[source]¶ Implement in subclasses of
PreTrainedModel
for custom behavior to adjust the logits in the generate method.
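As a hedged illustration of what such an override can do, the sketch below enforces forced_bos_token_id and forced_eos_token_id by pushing every other logit to a large negative value; the -1e9 masking constant and the subclass itself are assumptions for illustration, not library code:

import tensorflow as tf

# Illustrative override in a TFPreTrainedModel subclass; logits has shape
# (batch_size, vocab_size) at each generation step.
def adjust_logits_during_generation(self, logits, cur_len, max_length,
                                    forced_bos_token_id, forced_eos_token_id, **kwargs):
    vocab_size = logits.shape[-1]
    if cur_len == 1 and forced_bos_token_id is not None:
        # Keep only the forced BOS token by masking all other logits.
        logits += tf.one_hot(forced_bos_token_id, vocab_size, on_value=0.0, off_value=-1e9)
    if cur_len == max_length - 1 and forced_eos_token_id is not None:
        # Force EOS as the last generated token when max_length is reached.
        logits += tf.one_hot(forced_eos_token_id, vocab_size, on_value=0.0, off_value=-1e9)
    return logits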
-
generate
(input_ids=None, max_length=None, min_length=None, do_sample=None, early_stopping=None, num_beams=None, temperature=None, top_k=None, top_p=None, repetition_penalty=None, bad_words_ids=None, bos_token_id=None, pad_token_id=None, eos_token_id=None, length_penalty=None, no_repeat_ngram_size=None, num_return_sequences=None, attention_mask=None, decoder_start_token_id=None, use_cache=None, forced_bos_token_id=None, forced_eos_token_id=None)[source]¶ Generates sequences for models with a language modeling head. The method currently supports greedy decoding, beam-search decoding, sampling with temperature, sampling with top-k or nucleus sampling.
Adapted in part from Facebook’s XLM beam search code.
Apart from input_ids and attention_mask, all the arguments below will default to the value of the attribute of the same name inside the PretrainedConfig of the model. The default values indicated are the default values of that config.

Most of these parameters are explained in more detail in this blog post.
- Parameters

input_ids (tf.Tensor of dtype=tf.int32 and shape (batch_size, sequence_length), optional) – The sequence used as a prompt for the generation. If None, the method initializes it as an empty tf.Tensor of shape (1,).

max_length (int, optional, defaults to 20) – The maximum length of the sequence to be generated.

min_length (int, optional, defaults to 10) – The minimum length of the sequence to be generated.

do_sample (bool, optional, defaults to False) – Whether or not to use sampling; use greedy decoding otherwise.

early_stopping (bool, optional, defaults to False) – Whether to stop the beam search when at least num_beams sentences are finished per batch.

num_beams (int, optional, defaults to 1) – Number of beams for beam search. 1 means no beam search.

temperature (float, optional, defaults to 1.0) – The value used to modulate the next token probabilities.

top_k (int, optional, defaults to 50) – The number of highest probability vocabulary tokens to keep for top-k-filtering.

top_p (float, optional, defaults to 1.0) – If set to float < 1, only the most probable tokens with probabilities that add up to top_p or higher are kept for generation.

repetition_penalty (float, optional, defaults to 1.0) – The parameter for repetition penalty. 1.0 means no penalty. See this paper for more details.

pad_token_id (int, optional) – The id of the padding token.

bos_token_id (int, optional) – The id of the beginning-of-sequence token.

eos_token_id (int, optional) – The id of the end-of-sequence token.

length_penalty (float, optional, defaults to 1.0) – Exponential penalty to the length. 1.0 means no penalty. Set to values < 1.0 in order to encourage the model to generate shorter sequences, or to values > 1.0 in order to encourage the model to produce longer sequences.

no_repeat_ngram_size (int, optional, defaults to 0) – If set to int > 0, all ngrams of that size can only occur once.

bad_words_ids (List[int], optional) – List of token ids that are not allowed to be generated. In order to get the tokens of the words that should not appear in the generated text, use tokenizer.encode(bad_word, add_prefix_space=True).

num_return_sequences (int, optional, defaults to 1) – The number of independently computed returned sequences for each element in the batch.

attention_mask (tf.Tensor of dtype=tf.int32 and shape (batch_size, sequence_length), optional) – Mask to avoid performing attention on padding token indices. Mask values are in [0, 1]: 1 for tokens that are not masked, and 0 for masked tokens. If not provided, it will default to a tensor of the same shape as input_ids that masks the pad token.

decoder_start_token_id (int, optional) – If an encoder-decoder model starts decoding with a different token than bos, the id of that token.

use_cache (bool, optional, defaults to True) – Whether or not the model should use the past last key/values attentions (if applicable to the model) to speed up decoding.

forced_bos_token_id (int, optional) – The id of the token to force as the first generated token after the decoder_start_token_id. Useful for multilingual models like mBART where the first generated token needs to be the target language token.

forced_eos_token_id (int, optional) – The id of the token to force as the last generated token when max_length is reached.

model_specific_kwargs – Additional model-specific kwargs will be forwarded to the forward function of the model.
- Returns

The generated sequences. The second dimension (sequence_length) is either equal to max_length or shorter if all batches finished early due to the eos_token_id.

- Return type

tf.Tensor of dtype=tf.int32 and shape (batch_size * num_return_sequences, sequence_length)
Examples:
from transformers import AutoTokenizer, TFAutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained('distilgpt2')  # Initialize tokenizer
model = TFAutoModelWithLMHead.from_pretrained('distilgpt2')  # Download model and configuration from huggingface.co and cache.
outputs = model.generate(max_length=40)  # do greedy decoding
print(f'Generated: {tokenizer.decode(outputs[0], skip_special_tokens=True)}')

tokenizer = AutoTokenizer.from_pretrained('openai-gpt')  # Initialize tokenizer
model = TFAutoModelWithLMHead.from_pretrained('openai-gpt')  # Download model and configuration from huggingface.co and cache.
input_context = 'The dog'
input_ids = tokenizer.encode(input_context, return_tensors='tf')  # encode input context
outputs = model.generate(input_ids=input_ids, num_beams=5, num_return_sequences=3, temperature=1.5)  # generate 3 independent sequences using beam search decoding (5 beams) from initial context 'The dog'
for i in range(3):  # 3 output sequences were generated
    print(f'Generated {i}: {tokenizer.decode(outputs[i], skip_special_tokens=True)}')

tokenizer = AutoTokenizer.from_pretrained('distilgpt2')  # Initialize tokenizer
model = TFAutoModelWithLMHead.from_pretrained('distilgpt2')  # Download model and configuration from huggingface.co and cache.
input_context = 'The dog'
input_ids = tokenizer.encode(input_context, return_tensors='tf')  # encode input context
outputs = model.generate(input_ids=input_ids, max_length=40, temperature=0.7, num_return_sequences=3, do_sample=True)  # generate 3 candidates using sampling
for i in range(3):  # 3 output sequences were generated
    print(f'Generated {i}: {tokenizer.decode(outputs[i], skip_special_tokens=True)}')

tokenizer = AutoTokenizer.from_pretrained('ctrl')  # Initialize tokenizer
model = TFAutoModelWithLMHead.from_pretrained('ctrl')  # Download model and configuration from huggingface.co and cache.
input_context = 'Legal My neighbor is'  # "Legal" is one of the control codes for ctrl
input_ids = tokenizer.encode(input_context, return_tensors='tf')  # encode input context
outputs = model.generate(input_ids=input_ids, max_length=50, temperature=0.7, repetition_penalty=1.2)  # generate sequences
print(f'Generated: {tokenizer.decode(outputs[0], skip_special_tokens=True)}')

tokenizer = AutoTokenizer.from_pretrained('gpt2')  # Initialize tokenizer
model = TFAutoModelWithLMHead.from_pretrained('gpt2')  # Download model and configuration from huggingface.co and cache.
input_context = 'My cute dog'
bad_words_ids = [tokenizer.encode(bad_word, add_prefix_space=True) for bad_word in ['idiot', 'stupid', 'shut up']]
input_ids = tokenizer.encode(input_context, return_tensors='tf')  # encode input context
outputs = model.generate(input_ids=input_ids, max_length=100, do_sample=True, bad_words_ids=bad_words_ids)  # generate sequences without allowing bad_words to be generated
-
prepare_inputs_for_generation
(inputs, **kwargs)[source]¶ Implement in subclasses of
TFPreTrainedModel
for custom behavior to prepare inputs in the generate method.
-
-
class
transformers.generation_flax_utils.
FlaxGenerationMixin
[source]¶ A class containing all of the functions supporting generation, to be used as a mixin in
FlaxPreTrainedModel
.-
generate
(input_ids: jaxlib.xla_extension.DeviceArray, max_length: Optional[int] = None, pad_token_id: Optional[int] = None, bos_token_id: Optional[int] = None, eos_token_id: Optional[int] = None, decoder_start_token_id: Optional[int] = None, do_sample: Optional[bool] = None, prng_key: Optional[jaxlib.xla_extension.DeviceArray] = None, top_k: Optional[int] = None, top_p: Optional[float] = None, temperature: Optional[float] = None, num_beams: Optional[int] = None, no_repeat_ngram_size: Optional[int] = None, min_length: Optional[int] = None, forced_bos_token_id: Optional[int] = None, forced_eos_token_id: Optional[int] = None, length_penalty: Optional[float] = None, early_stopping: Optional[bool] = None, trace: bool = True, params: Optional[Dict[str, jaxlib.xla_extension.DeviceArray]] = None, **model_kwargs)[source]¶ Generates sequences for models with a language modeling head. The method currently supports greedy decoding and multinomial sampling.
Apart from input_ids, all the arguments below will default to the value of the attribute of the same name inside the PretrainedConfig of the model. The default values indicated are the default values of that config.

Most of these parameters are explained in more detail in this blog post.
- Parameters

input_ids (jax_xla.DeviceArray of shape (batch_size, sequence_length), optional) – The sequence used as a prompt for the generation.

max_length (int, optional, defaults to 20) – The maximum length of the sequence to be generated.

do_sample (bool, optional, defaults to False) – Whether or not to use sampling; use greedy decoding otherwise.

temperature (float, optional, defaults to 1.0) – The value used to modulate the next token probabilities.

top_k (int, optional, defaults to 50) – The number of highest probability vocabulary tokens to keep for top-k-filtering.

top_p (float, optional, defaults to 1.0) – If set to float < 1, only the most probable tokens with probabilities that add up to top_p or higher are kept for generation.

pad_token_id (int, optional) – The id of the padding token.

bos_token_id (int, optional) – The id of the beginning-of-sequence token.

eos_token_id (int, optional) – The id of the end-of-sequence token.

num_beams (int, optional, defaults to 1) – Number of beams for beam search. 1 means no beam search.

decoder_start_token_id (int, optional) – If an encoder-decoder model starts decoding with a different token than bos, the id of that token.

trace (bool, optional, defaults to True) – Whether to trace generation. Setting trace=False should only be used for debugging and will lead to a considerably slower runtime.

params (Dict[str, jax_xla.DeviceArray], optional) – Optionally the model parameters can be passed. Can be useful for parallelized generation.

model_kwargs – Additional model-specific kwargs will be forwarded to the forward function of the model.
Examples:
>>> from transformers import AutoTokenizer, FlaxAutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
>>> model = FlaxAutoModelForCausalLM.from_pretrained("distilgpt2")
>>> input_context = "The dog"
>>> # encode input context
>>> input_ids = tokenizer(input_context, return_tensors="jax").input_ids
>>> # generate candidates using sampling
>>> outputs = model.generate(input_ids=input_ids, max_length=20, top_k=30, do_sample=True)
>>> print("Generated:", tokenizer.batch_decode(outputs, skip_special_tokens=True))
-
Pushing to the Hub¶
-
class
transformers.file_utils.
PushToHubMixin
[source]¶ A Mixin containing the functionality to push a model or tokenizer to the hub.
-
push_to_hub
(repo_name: Optional[str] = None, repo_url: Optional[str] = None, commit_message: Optional[str] = None, organization: Optional[str] = None, private: bool = None, use_auth_token: Optional[Union[bool, str]] = None) → str[source]¶ Upload model checkpoint or tokenizer files to the 🤗 model hub.
- Parameters

repo_name (str, optional) – Repository name for your model or tokenizer in the hub. If not specified, the repository name will be the stem of save_directory.

repo_url (str, optional) – Specify this in case you want to push to an existing repository in the hub. If unspecified, a new repository will be created in your namespace (unless you specify an organization) with repo_name.

commit_message (str, optional) – Message to commit while pushing. Will default to "add config", "add tokenizer" or "add model" depending on the type of the class.

organization (str, optional) – Organization in which you want to push your model or tokenizer (you must be a member of this organization).

private (bool, optional) – Whether or not the repository created should be private (requires a paying subscription).

use_auth_token (bool or str, optional) – The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running transformers-cli login (stored in ~/.huggingface). Will default to True if repo_url is not specified.
- Returns
The url of the commit of your model in the given repository.
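A minimal usage sketch based on the parameters above; the repository name "my-finetuned-model" is hypothetical, and this assumes you are already logged in via transformers-cli login:

from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("distilgpt2")
# Push the checkpoint to a repo under your namespace; the returned string
# is the url of the commit.
url = model.push_to_hub(repo_name="my-finetuned-model", commit_message="add model")
print(url)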
-