
Configuration

The base class PretrainedConfig implements the common methods for loading/saving a configuration either from a local file or directory, or from a pretrained model configuration provided by the library (downloaded from HuggingFace’s AWS S3 repository).

Each derived config class implements model specific attributes. Common attributes present in all config classes are: hidden_size, num_attention_heads, and num_hidden_layers. Text models further implement: vocab_size.

PretrainedConfig

class transformers.PretrainedConfig


( **kwargs )

Parameters

  • name_or_path (str, optional, defaults to "") — Store the string that was passed to PreTrainedModel.from_pretrained() or TFPreTrainedModel.from_pretrained() as pretrained_model_name_or_path if the configuration was created with such a method.
  • output_hidden_states (bool, optional, defaults to False) — Whether or not the model should return all hidden-states.
  • output_attentions (bool, optional, defaults to False) — Whether or not the model should return all attentions.
  • return_dict (bool, optional, defaults to True) — Whether or not the model should return a ModelOutput instead of a plain tuple.
  • is_encoder_decoder (bool, optional, defaults to False) — Whether the model is used as an encoder/decoder or not.
  • is_decoder (bool, optional, defaults to False) — Whether the model is used as a decoder or not (if not, the model is used as an encoder).
  • cross_attention_hidden_size (int, optional) — The hidden size of the cross-attention layer in case the model is used as a decoder in an encoder-decoder setting and the cross-attention hidden dimension differs from self.config.hidden_size.
  • add_cross_attention (bool, optional, defaults to False) — Whether cross-attention layers should be added to the model. Note, this option is only relevant for models that can be used as decoder models within the EncoderDecoderModel class, which consists of all models in AUTO_MODELS_FOR_CAUSAL_LM.
  • tie_encoder_decoder (bool, optional, defaults to False) — Whether all encoder weights should be tied to their equivalent decoder weights. This requires the encoder and decoder model to have the exact same parameter names.
  • pruned_heads (Dict[int, List[int]], optional, defaults to {}) — Pruned heads of the model. The keys are the selected layer indices and the associated values, the list of heads to prune in said layer.

    For instance {1: [0, 2], 2: [2, 3]} will prune heads 0 and 2 on layer 1 and heads 2 and 3 on layer 2.

  • chunk_size_feed_forward (int, optional, defaults to 0) — The chunk size of all feed forward layers in the residual attention blocks. A chunk size of 0 means that the feed forward layer is not chunked. A chunk size of n means that the feed forward layer processes n < sequence_length embeddings at a time. For more information on feed forward chunking, see How does Feed Forward Chunking work?.
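
Example (an illustrative sketch; BertConfig stands in for any derived config class, which all accept these common keyword arguments):

from transformers import BertConfig

# Keyword arguments accepted by PretrainedConfig can be passed to any
# derived config class at construction time.
config = BertConfig(
    output_hidden_states=True,
    output_attentions=True,
    pruned_heads={1: [0, 2], 2: [2, 3]},  # prune heads 0/2 in layer 1, heads 2/3 in layer 2
)
print(config.output_hidden_states)  # True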

Parameters for sequence generation

  • max_length (int, optional, defaults to 20) — Maximum length that will be used by default in the generate method of the model.
  • min_length (int, optional, defaults to 0) — Minimum length that will be used by default in the generate method of the model.
  • do_sample (bool, optional, defaults to False) — Flag that will be used by default in the generate method of the model. Whether or not to use sampling; use greedy decoding otherwise.
  • early_stopping (bool, optional, defaults to False) — Flag that will be used by default in the generate method of the model. Whether to stop the beam search when at least num_beams sentences are finished per batch or not.
  • num_beams (int, optional, defaults to 1) — Number of beams for beam search that will be used by default in the generate method of the model. 1 means no beam search.
  • num_beam_groups (int, optional, defaults to 1) — Number of groups to divide num_beams into in order to ensure diversity among different groups of beams that will be used by default in the generate method of the model. 1 means no group beam search.
  • diversity_penalty (float, optional, defaults to 0.0) — Value to control diversity for group beam search that will be used by default in the generate method of the model. 0 means no diversity penalty. The higher the penalty, the more diverse are the outputs.
  • temperature (float, optional, defaults to 1.0) — The value used to modulate the next token probabilities that will be used by default in the generate method of the model. Must be strictly positive.
  • top_k (int, optional, defaults to 50) — Number of highest probability vocabulary tokens to keep for top-k-filtering that will be used by default in the generate method of the model.
  • top_p (float, optional, defaults to 1) — Value that will be used by default in the generate method of the model for top_p. If set to float < 1, only the most probable tokens with probabilities that add up to top_p or higher are kept for generation.
  • typical_p (float, optional, defaults to 1) — Local typicality measures how similar the conditional probability of predicting a target token next is to the expected conditional probability of predicting a random token next, given the partial text already generated. If set to float < 1, the smallest set of the most locally typical tokens with probabilities that add up to typical_p or higher are kept for generation. See this paper for more details.
  • repetition_penalty (float, optional, defaults to 1) — Parameter for repetition penalty that will be used by default in the generate method of the model. 1.0 means no penalty.
  • length_penalty (float, optional, defaults to 1) — Exponential penalty to the length that is used with beam-based generation. It is applied as an exponent to the sequence length, which in turn is used to divide the score of the sequence. Since the score is the log likelihood of the sequence (i.e. negative), length_penalty > 0.0 promotes longer sequences, while length_penalty < 0.0 encourages shorter sequences.
  • no_repeat_ngram_size (int, optional, defaults to 0) — Value that will be used by default in the generate method of the model for no_repeat_ngram_size. If set to int > 0, all ngrams of that size can only occur once.
  • encoder_no_repeat_ngram_size (int, optional, defaults to 0) — Value that will be used by default in the generate method of the model for encoder_no_repeat_ngram_size. If set to int > 0, all ngrams of that size that occur in the encoder_input_ids cannot occur in the decoder_input_ids.
  • bad_words_ids (List[int], optional) — List of token ids that are not allowed to be generated that will be used by default in the generate method of the model. In order to get the tokens of the words that should not appear in the generated text, use tokenizer.encode(bad_word, add_prefix_space=True).
  • num_return_sequences (int, optional, defaults to 1) — Number of independently computed returned sequences for each element in the batch that will be used by default in the generate method of the model.
  • output_scores (bool, optional, defaults to False) — Whether the model should return the logits when used for generation.
  • return_dict_in_generate (bool, optional, defaults to False) — Whether the model should return a ModelOutput instead of a torch.LongTensor.
  • forced_bos_token_id (int, optional) — The id of the token to force as the first generated token after the decoder_start_token_id. Useful for multilingual models like mBART where the first generated token needs to be the target language token.
  • forced_eos_token_id (int, optional) — The id of the token to force as the last generated token when max_length is reached.
  • remove_invalid_values (bool, optional) — Whether to remove possible nan and inf outputs of the model to prevent the generation method from crashing. Note that using remove_invalid_values can slow down generation.
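
Example (illustrative; note that newer Transformers releases prefer storing these defaults on a separate GenerationConfig object):

from transformers import GPT2Config

# Generation defaults stored on the model config; `generate` falls back
# to these when no explicit arguments are given.
config = GPT2Config(
    max_length=50,
    num_beams=4,
    early_stopping=True,
    no_repeat_ngram_size=2,
)
print(config.num_beams)  # 4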

Parameters for fine-tuning tasks

  • architectures (List[str], optional) — Model architectures that can be used with the model pretrained weights.
  • finetuning_task (str, optional) — Name of the task used to fine-tune the model. This can be used when converting from an original (TensorFlow or PyTorch) checkpoint.
  • id2label (Dict[int, str], optional) — A map from index (for instance prediction index, or target index) to label.
  • label2id (Dict[str, int], optional) — A map from label to index for the model.
  • num_labels (int, optional) — Number of labels to use in the last layer added to the model, typically for a classification task.
  • task_specific_params (Dict[str, Any], optional) — Additional keyword arguments to store for the current task.
  • problem_type (str, optional) — Problem type for XxxForSequenceClassification models. Can be one of "regression", "single_label_classification" or "multi_label_classification".
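
Example (an illustrative sketch for a binary classification fine-tune; the label names are made up):

from transformers import BertConfig

id2label = {0: "NEGATIVE", 1: "POSITIVE"}
config = BertConfig.from_pretrained(
    "google-bert/bert-base-uncased",
    id2label=id2label,
    label2id={label: idx for idx, label in id2label.items()},
    problem_type="single_label_classification",
)
print(config.num_labels)  # 2, inferred from id2label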

Parameters linked to the tokenizer

  • tokenizer_class (str, optional) — The name of the associated tokenizer class to use (if none is set, will use the tokenizer associated to the model by default).
  • prefix (str, optional) — A specific prompt that should be added at the beginning of each text before calling the model.
  • bos_token_id (int, optional) — The id of the beginning-of-stream token.
  • pad_token_id (int, optional) — The id of the padding token.
  • eos_token_id (int, optional) — The id of the end-of-stream token.
  • decoder_start_token_id (int, optional) — If an encoder-decoder model starts decoding with a different token than bos, the id of that token.
  • sep_token_id (int, optional) — The id of the separation token.
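
Example (illustrative; GPT-2 happens to use the same id for its beginning- and end-of-stream tokens):

from transformers import GPT2Config

config = GPT2Config.from_pretrained("openai-community/gpt2")
print(config.bos_token_id, config.eos_token_id)  # 50256 50256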

PyTorch specific parameters

  • torchscript (bool, optional, defaults to False) — Whether or not the model should be used with Torchscript.
  • tie_word_embeddings (bool, optional, defaults to True) — Whether the model’s input and output word embeddings should be tied. Note that this is only relevant if the model has an output word embedding layer.
  • torch_dtype (str, optional) — The dtype of the weights. This attribute can be used to initialize the model to a non-default dtype (which is normally float32) and thus allow for optimal storage allocation. For example, if the saved model is float16, ideally we want to load it back using the minimal amount of memory needed to load float16 weights. Since the config object is stored in plain text, this attribute contains just the floating type string without the torch. prefix. For example, for torch.float16, torch_dtype is the "float16" string.

    This attribute is currently not being used during model loading, but this may change in future versions. We can already start preparing for the future by saving the dtype with save_pretrained.
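
Example (a minimal sketch of the dtype round trip, assuming PyTorch is installed):

import torch
from transformers import BertConfig

config = BertConfig(torch_dtype=torch.float16)
print(config.torch_dtype)               # torch.float16 on the live object
print(config.to_dict()["torch_dtype"])  # "float16" once serialized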

TensorFlow specific parameters

  • use_bfloat16 (bool, optional, defaults to False) — Whether or not the model should use BFloat16 scalars (only used by some TensorFlow models).
  • tf_legacy_loss (bool, optional, defaults to False) — Whether the model should use legacy TensorFlow losses. Legacy losses have variable output shapes and may not be XLA-compatible. This option is here for backward compatibility and will be removed in Transformers v5.

Base class for all configuration classes. Handles a few parameters common to all models’ configurations as well as methods for loading/downloading/saving configurations.

A configuration file can be loaded and saved to disk. Loading the configuration file and using this file to initialize a model does not load the model weights. It only affects the model’s configuration.

Class attributes (overridden by derived classes):

  • model_type (str) — An identifier for the model type, serialized into the JSON file, and used to recreate the correct object in AutoConfig.
  • is_composition (bool) — Whether the config class is composed of multiple sub-configs. In this case the config has to be initialized from two or more configs of type PretrainedConfig like: EncoderDecoderConfig or RagConfig.
  • keys_to_ignore_at_inference (List[str]) — A list of keys to ignore by default when looking at dictionary outputs of the model during inference.
  • attribute_map (Dict[str, str]) — A dict that maps model specific attribute names to the standardized naming of attributes.

Common attributes (present in all subclasses):

  • vocab_size (int) — The number of tokens in the vocabulary, which is also the first dimension of the embeddings matrix (this attribute may be missing for models that don’t have a text modality like ViT).
  • hidden_size (int) — The hidden size of the model.
  • num_attention_heads (int) — The number of attention heads used in the multi-head attention layers of the model.
  • num_hidden_layers (int) — The number of blocks in the model.
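
Example (illustrative; GPT2Config uses attribute_map to expose its model-specific names under the standardized ones):

from transformers import GPT2Config

config = GPT2Config()
# `attribute_map` aliases the standardized names to GPT-2's own names.
print(config.n_embd, config.hidden_size)         # 768 768
print(config.n_layer, config.num_hidden_layers)  # 12 12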

push_to_hub

( repo_id: str use_temp_dir: Optional[bool] = None commit_message: Optional[str] = None private: Optional[bool] = None token: Optional[Union[bool, str]] = None max_shard_size: Optional[Union[int, str]] = '5GB' create_pr: bool = False safe_serialization: bool = True revision: str = None commit_description: str = None tags: Optional[List[str]] = None **deprecated_kwargs )

Parameters

  • repo_id (str) — The name of the repository you want to push your config to. It should contain your organization name when pushing to a given organization.
  • use_temp_dir (bool, optional) — Whether or not to use a temporary directory to store the files saved before they are pushed to the Hub. Will default to True if there is no directory named like repo_id, False otherwise.
  • commit_message (str, optional) — Message to commit while pushing. Will default to "Upload config".
  • private (bool, optional) — Whether or not the repository created should be private.
  • token (bool or str, optional) — The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface). Will default to True if repo_url is not specified.
  • max_shard_size (int or str, optional, defaults to "5GB") — Only applicable for models. The maximum size for a checkpoint before being sharded. Each checkpoint shard will then be smaller than this size. If expressed as a string, needs to be digits followed by a unit (like "5MB"). We default it to "5GB" so that users can easily load models on free-tier Google Colab instances without any CPU OOM issues.
  • create_pr (bool, optional, defaults to False) — Whether or not to create a PR with the uploaded files or directly commit.
  • safe_serialization (bool, optional, defaults to True) — Whether or not to convert the model weights in safetensors format for safer serialization.
  • revision (str, optional) — Branch to push the uploaded files to.
  • commit_description (str, optional) — The description of the commit that will be created.
  • tags (List[str], optional) — List of tags to push on the Hub.

Upload the configuration file to the 🤗 Model Hub.

Examples:

from transformers import AutoConfig

config = AutoConfig.from_pretrained("google-bert/bert-base-cased")

# Push the config to your namespace with the name "my-finetuned-bert".
config.push_to_hub("my-finetuned-bert")

# Push the config to an organization with the name "my-finetuned-bert".
config.push_to_hub("huggingface/my-finetuned-bert")

dict_torch_dtype_to_str

( d: Dict[str, Any] )

Checks whether the passed dictionary and its nested dicts have a torch_dtype key and, if it’s not None, converts torch.dtype to a string containing just the type. For example, torch.float32 gets converted into the string "float32", which can then be stored in JSON format.
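
Example (a minimal sketch; the dictionary is modified in place):

import torch
from transformers import BertConfig

d = {"torch_dtype": torch.float32}
BertConfig().dict_torch_dtype_to_str(d)
print(d["torch_dtype"])  # "float32"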

from_dict

( config_dict: Dict[str, Any] **kwargs ) → PretrainedConfig

Parameters

  • config_dict (Dict[str, Any]) — Dictionary that will be used to instantiate the configuration object. Such a dictionary can be retrieved from a pretrained checkpoint by leveraging the get_config_dict() method.
  • kwargs (Dict[str, Any]) — Additional parameters from which to initialize the configuration object.

Returns

PretrainedConfig

The configuration object instantiated from those parameters.

Instantiates a PretrainedConfig from a Python dictionary of parameters.
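
Example (an illustrative sketch pairing from_dict() with get_config_dict()):

from transformers import BertConfig

# Fetch the raw parameter dictionary for a checkpoint, then build a config from it.
config_dict, unused_kwargs = BertConfig.get_config_dict("google-bert/bert-base-uncased")
config = BertConfig.from_dict(config_dict)
print(config.hidden_size)  # 768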

from_json_file

( json_file: Union[str, os.PathLike] ) → PretrainedConfig

Parameters

  • json_file (str or os.PathLike) — Path to the JSON file containing the parameters.

Returns

PretrainedConfig

The configuration object instantiated from that JSON file.

Instantiates a PretrainedConfig from the path to a JSON file of parameters.
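
Example (a minimal round-trip sketch):

from transformers import BertConfig

config = BertConfig()
config.to_json_file("config.json")
reloaded = BertConfig.from_json_file("config.json")
assert reloaded.hidden_size == config.hidden_size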

from_pretrained

( pretrained_model_name_or_path: Union[str, os.PathLike] cache_dir: Optional[Union[str, os.PathLike]] = None force_download: bool = False local_files_only: bool = False token: Optional[Union[str, bool]] = None revision: str = 'main' **kwargs ) → PretrainedConfig

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — This can be either:

    • a string, the model id of a pretrained model configuration hosted inside a model repo on huggingface.co.
    • a path to a directory containing a configuration file saved using the save_pretrained() method, e.g., ./my_model_directory/.
    • a path or url to a saved configuration JSON file, e.g., ./my_model_directory/configuration.json.
  • cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
  • force_download (bool, optional, defaults to False) — Whether or not to force to (re-)download the configuration files and override the cached versions if they exist.
  • resume_download — Deprecated and ignored. All downloads are now resumed by default when possible. Will be removed in v5 of Transformers.
  • proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
  • token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, or not specified, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).
  • revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.

    To test a pull request you made on the Hub, you can pass revision="refs/pr/<pr_number>".

  • return_unused_kwargs (bool, optional, defaults to False) — If False, then this function returns just the final configuration object.

    If True, then this function returns a Tuple(config, unused_kwargs) where unused_kwargs is a dictionary consisting of the key/value pairs whose keys are not configuration attributes: i.e., the part of kwargs which has not been used to update config and is otherwise ignored.

  • subfolder (str, optional, defaults to "") — In case the relevant files are located inside a subfolder of the model repo on huggingface.co, you can specify the folder name here.
  • kwargs (Dict[str, Any], optional) — The values in kwargs of any keys which are configuration attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are not configuration attributes is controlled by the return_unused_kwargs keyword parameter.

Returns

PretrainedConfig

The configuration object instantiated from this pretrained model.

Instantiate a PretrainedConfig (or a derived class) from a pretrained model configuration.

Examples:

# We can't instantiate directly the base class *PretrainedConfig* so let's show the examples on a
# derived class: BertConfig
config = BertConfig.from_pretrained(
    "google-bert/bert-base-uncased"
)  # Download configuration from huggingface.co and cache.
config = BertConfig.from_pretrained(
    "./test/saved_model/"
)  # E.g. config (or model) was saved using *save_pretrained('./test/saved_model/')*
config = BertConfig.from_pretrained("./test/saved_model/my_configuration.json")
config = BertConfig.from_pretrained("google-bert/bert-base-uncased", output_attentions=True, foo=False)
assert config.output_attentions == True
config, unused_kwargs = BertConfig.from_pretrained(
    "google-bert/bert-base-uncased", output_attentions=True, foo=False, return_unused_kwargs=True
)
assert config.output_attentions == True
assert unused_kwargs == {"foo": False}

get_config_dict

( pretrained_model_name_or_path: Union[str, os.PathLike] **kwargs ) → Tuple[Dict, Dict]

Parameters

  • pretrained_model_name_or_path (str or os.PathLike) — The identifier of the pre-trained checkpoint from which we want the dictionary of parameters.

Returns

Tuple[Dict, Dict]

The dictionary(ies) that will be used to instantiate the configuration object.

From a pretrained_model_name_or_path, resolve to a dictionary of parameters, to be used for instantiating a PretrainedConfig using from_dict.

register_for_auto_class


( auto_class = 'AutoConfig' )

Parameters

  • auto_class (str or type, optional, defaults to "AutoConfig") — The auto class to register this new configuration with.

Register this class with a given auto class. This should only be used for custom configurations as the ones in the library are already mapped with AutoConfig.

This API is experimental and may have some slight breaking changes in the next releases.
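
Example (a sketch with a hypothetical custom configuration class; the name and attributes are made up for illustration):

from transformers import PretrainedConfig

class MyCustomConfig(PretrainedConfig):  # hypothetical custom configuration
    model_type = "my_custom_model"

    def __init__(self, hidden_size=64, **kwargs):
        self.hidden_size = hidden_size
        super().__init__(**kwargs)

# Make the class discoverable through AutoConfig when shared with custom code.
MyCustomConfig.register_for_auto_class()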

save_pretrained

( save_directory: Union[str, os.PathLike] push_to_hub: bool = False **kwargs )

Parameters

  • save_directory (str or os.PathLike) — Directory where the configuration JSON file will be saved (will be created if it does not exist).
  • push_to_hub (bool, optional, defaults to False) — Whether or not to push your model to the Hugging Face model hub after saving it. You can specify the repository you want to push to with repo_id (will default to the name of save_directory in your namespace).
  • kwargs (Dict[str, Any], optional) — Additional keyword arguments passed along to the push_to_hub() method.

Save a configuration object to the directory save_directory, so that it can be re-loaded using the from_pretrained() class method.
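
Example (an illustrative sketch):

from transformers import BertConfig

config = BertConfig.from_pretrained("google-bert/bert-base-cased")
config.save_pretrained("./my_model")  # writes ./my_model/config.json
reloaded = BertConfig.from_pretrained("./my_model")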

to_dict

( ) → Dict[str, Any]

Returns

Dict[str, Any]

Dictionary of all the attributes that make up this configuration instance.

Serializes this instance to a Python dictionary.

to_diff_dict

( ) → Dict[str, Any]

Returns

Dict[str, Any]

Dictionary of all the attributes that make up this configuration instance.

Removes all attributes from config which correspond to the default config attributes for better readability and serializes to a Python dictionary.
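
Example (an illustrative sketch of the difference between to_dict() and to_diff_dict()):

from transformers import BertConfig

config = BertConfig(hidden_size=1024)
full = config.to_dict()       # every attribute, defaults included
diff = config.to_diff_dict()  # only values that differ from the defaults
print("hidden_size" in full, "hidden_size" in diff)  # True True
print("num_attention_heads" in diff)                 # False, still the default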

to_json_file

( json_file_path: Union[str, os.PathLike] use_diff: bool = True )

Parameters

  • json_file_path (str or os.PathLike) — Path to the JSON file in which this configuration instance’s parameters will be saved.
  • use_diff (bool, optional, defaults to True) — If set to True, only the difference between the config instance and the default PretrainedConfig() is serialized to JSON file.

Save this instance to a JSON file.

to_json_string

( use_diff: bool = True ) → str

Parameters

  • use_diff (bool, optional, defaults to True) — If set to True, only the difference between the config instance and the default PretrainedConfig() is serialized to JSON string.

Returns

str

String containing all the attributes that make up this configuration instance in JSON format.

Serializes this instance to a JSON string.
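
Example (illustrative):

from transformers import BertConfig

config = BertConfig(num_hidden_layers=6)
print(config.to_json_string())                # only the non-default values
print(config.to_json_string(use_diff=False))  # the full configuration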

update

( config_dict: Dict[str, Any] )

Parameters

  • config_dict (Dict[str, Any]) — Dictionary of attributes that should be updated for this class.

Updates attributes of this class with attributes from config_dict.

update_from_string


( update_str: str )

Parameters

  • update_str (str) — String with attributes that should be updated for this class.

Updates attributes of this class with attributes from update_str.

The expected format is ints, floats and strings as is, and for booleans use true or false. For example: "n_embd=10,resid_pdrop=0.2,scale_attn_weights=false,summary_type=cls_index".

The keys to change have to already exist in the config object.
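
Example (an illustrative sketch using GPT2Config, whose attributes match the format string above):

from transformers import GPT2Config

config = GPT2Config()
config.update({"n_embd": 10})
config.update_from_string("resid_pdrop=0.2,scale_attn_weights=false,summary_type=cls_index")
print(config.n_embd, config.resid_pdrop, config.scale_attn_weights)  # 10 0.2 False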
