Configuration

The base class PretrainedConfig implements the common methods for loading/saving a configuration either from a local file or directory, or from a pretrained model configuration provided by the library (downloaded from HuggingFace's AWS S3 repository).

Each derived config class implements model-specific attributes. Common attributes present in all config classes are: hidden_size, num_attention_heads, and num_hidden_layers. Text models further implement: vocab_size.
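
As a quick illustration, here is a minimal sketch of reading these common attributes, using the derived BertConfig class (the values shown are the BERT-base defaults):

from transformers import BertConfig

config = BertConfig()              # default BERT-base configuration
print(config.hidden_size)          # 768
print(config.num_attention_heads)  # 12
print(config.num_hidden_layers)    # 12
print(config.vocab_size)           # 30522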

PretrainedConfig

class transformers.PretrainedConfig(**kwargs)

Base class for all configuration classes. Handles a few parameters common to all models' configurations as well as methods for loading/downloading/saving configurations.

Note

A configuration file can be loaded and saved to disk. Loading the configuration file and using this file to initialize a model does not load the model weights. It only affects the model's configuration.
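
For example, instantiating a model from a configuration alone yields randomly initialized weights, while the model's own from_pretrained() method loads the checkpoint weights:

from transformers import BertConfig, BertModel

config = BertConfig.from_pretrained('bert-base-uncased')
model = BertModel(config)                               # weights are randomly initialized
model = BertModel.from_pretrained('bert-base-uncased')  # weights are loaded from the checkpoint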

Class attributes (overridden by derived classes)

  • model_type (str) – An identifier for the model type, serialized into the JSON file, and used to recreate the correct object in AutoConfig.

  • is_composition (bool) – Whether the config class is composed of multiple sub-configs. In this case the config has to be initialized from two or more configs of type PretrainedConfig like: EncoderDecoderConfig or RagConfig.

  • keys_to_ignore_at_inference (List[str]) – A list of keys to ignore by default when looking at dictionary outputs of the model during inference.

  • attribute_map (Dict[str, str]) – A dict that maps model specific attribute names to the standardized naming of attributes.

Common attributes (present in all subclasses)

  • vocab_size (int) – The number of tokens in the vocabulary, which is also the first dimension of the embeddings matrix (this attribute may be missing for models that don't have a text modality like ViT).

  • hidden_size (int) – The hidden size of the model.

  • num_attention_heads (int) – The number of attention heads used in the multi-head attention layers of the model.

  • num_hidden_layers (int) – The number of blocks in the model.

Parameters
  • name_or_path (str, optional, defaults to "") – Stores the string that was passed to PreTrainedModel.from_pretrained() or TFPreTrainedModel.from_pretrained() as pretrained_model_name_or_path if the configuration was created with such a method.

  • output_hidden_states (bool, optional, defaults to False) – Whether or not the model should return all hidden-states.

  • output_attentions (bool, optional, defaults to False) – Whether or not the model should return all attentions.

  • return_dict (bool, optional, defaults to True) – Whether or not the model should return a ModelOutput instead of a plain tuple.

  • is_encoder_decoder (bool, optional, defaults to False) – Whether the model is used as an encoder/decoder or not.

  • is_decoder (bool, optional, defaults to False) – Whether the model is used as a decoder or not (if False, the model is used as an encoder).

  • cross_attention_hidden_size (int, optional) – The hidden size of the cross-attention layer in case the model is used as a decoder in an encoder-decoder setting and the cross-attention hidden dimension differs from self.config.hidden_size.

  • add_cross_attention (bool, optional, defaults to False) – Whether cross-attention layers should be added to the model. Note, this option is only relevant for models that can be used as decoder models within the EncoderDecoderModel class, which consists of all models in AUTO_MODELS_FOR_CAUSAL_LM.

  • tie_encoder_decoder (bool, optional, defaults to False) – Whether all encoder weights should be tied to their equivalent decoder weights. This requires the encoder and decoder model to have the exact same parameter names.

  • pruned_heads (Dict[int, List[int]], optional, defaults to {}) –

    Pruned heads of the model. The keys are the selected layer indices and the associated values, the list of heads to prune in said layer.

    For instance {1: [0, 2], 2: [2, 3]} will prune heads 0 and 2 on layer 1 and heads 2 and 3 on layer 2.

  • chunk_size_feed_forward (int, optional, defaults to 0) – The chunk size of all feed forward layers in the residual attention blocks. A chunk size of 0 means that the feed forward layer is not chunked. A chunk size of n means that the feed forward layer processes n < sequence_length embeddings at a time. For more information on feed forward chunking, see How does Feed Forward Chunking work?
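
All of these parameters (and those in the sections below) can be passed as keyword arguments when constructing a configuration or when calling from_pretrained(). A minimal sketch, using BertConfig for illustration:

from transformers import BertConfig

# Override defaults at construction time...
config = BertConfig(output_hidden_states=True, output_attentions=True)

# ...or when loading a pretrained configuration.
config = BertConfig.from_pretrained('bert-base-uncased', output_hidden_states=True)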

Parameters for sequence generation

  • max_length (int, optional, defaults to 20) – Maximum length that will be used by default in the generate method of the model.

  • min_length (int, optional, defaults to 10) – Minimum length that will be used by default in the generate method of the model.

  • do_sample (bool, optional, defaults to False) – Flag that will be used by default in the generate method of the model. Whether or not to use sampling; use greedy decoding otherwise.

  • early_stopping (bool, optional, defaults to False) – Flag that will be used by default in the generate method of the model. Whether to stop the beam search when at least num_beams sentences are finished per batch or not.

  • num_beams (int, optional, defaults to 1) – Number of beams for beam search that will be used by default in the generate method of the model. 1 means no beam search.

  • num_beam_groups (int, optional, defaults to 1) – Number of groups to divide num_beams into in order to ensure diversity among different groups of beams that will be used by default in the generate method of the model. 1 means no group beam search.

  • diversity_penalty (float, optional, defaults to 0.0) – Value to control diversity for group beam search. that will be used by default in the generate method of the model. 0 means no diversity penalty. The higher the penalty, the more diverse are the outputs.

  • temperature (float, optional, defaults to 1) – The value used to modulate the next token probabilities that will be used by default in the generate method of the model. Must be strictly positive.

  • top_k (int, optional, defaults to 50) – Number of highest probability vocabulary tokens to keep for top-k-filtering that will be used by default in the generate method of the model.

  • top_p (float, optional, defaults to 1) – Value that will be used by default in the generate method of the model for top_p. If set to float < 1, only the most probable tokens with probabilities that add up to top_p or higher are kept for generation.

  • repetition_penalty (float, optional, defaults to 1) – Parameter for repetition penalty that will be used by default in the generate method of the model. 1.0 means no penalty.

  • length_penalty (float, optional, defaults to 1) – Exponential penalty to the length that will be used by default in the generate method of the model.

  • no_repeat_ngram_size (int, optional, defaults to 0) – Value that will be used by default in the generate method of the model for no_repeat_ngram_size. If set to int > 0, all ngrams of that size can only occur once.

  • encoder_no_repeat_ngram_size (int, optional, defaults to 0) – Value that will be used by default in the generate method of the model for encoder_no_repeat_ngram_size. If set to int > 0, all ngrams of that size that occur in the encoder_input_ids cannot occur in the decoder_input_ids.

  • bad_words_ids (List[List[int]], optional) – List of lists of token ids that are not allowed to be generated that will be used by default in the generate method of the model. In order to get the tokens of the words that should not appear in the generated text, use tokenizer.encode(bad_word, add_prefix_space=True).

  • num_return_sequences (int, optional, defaults to 1) – Number of independently computed returned sequences for each element in the batch that will be used by default in the generate method of the model.

  • output_scores (bool, optional, defaults to False) – Whether the model should return the logits when used for generation.

  • return_dict_in_generate (bool, optional, defaults to False) – Whether the model should return a ModelOutput instead of a torch.LongTensor.

  • forced_bos_token_id (int, optional) – The id of the token to force as the first generated token after the decoder_start_token_id. Useful for multilingual models like mBART where the first generated token needs to be the target language token.

  • forced_eos_token_id (int, optional) – The id of the token to force as the last generated token when max_length is reached.

  • remove_invalid_values (bool, optional) – Whether to remove possible nan and inf outputs of the model to prevent the generation method from crashing. Note that using remove_invalid_values can slow down generation.
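
These values act as defaults for the model's generate method. A sketch of storing generation defaults in the configuration, using GPT-2 for illustration:

from transformers import GPT2Config, GPT2LMHeadModel

config = GPT2Config.from_pretrained('gpt2', max_length=40, num_beams=4, early_stopping=True)
model = GPT2LMHeadModel.from_pretrained('gpt2', config=config)
# model.generate(input_ids) now defaults to beam search with 4 beams and up to 40 tokens.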

Parameters for fine-tuning tasks

  • architectures (List[str], optional) – Model architectures that can be used with the model pretrained weights.

  • finetuning_task (str, optional) – Name of the task used to fine-tune the model. This can be used when converting from an original (TensorFlow or PyTorch) checkpoint.

  • id2label (Dict[int, str], optional) – A map from index (for instance prediction index, or target index) to label.

  • label2id (Dict[str, int], optional) – A map from label to index for the model.

  • num_labels (int, optional) – Number of labels to use in the last layer added to the model, typically for a classification task.

  • task_specific_params (Dict[str, Any], optional) – Additional keyword arguments to store for the current task.

  • problem_type (str, optional) – Problem type for XxxForSequenceClassification models. Can be one of ("regression", "single_label_classification", "multi_label_classification"). Please note that this parameter is only available in the following models: AlbertForSequenceClassification, BertForSequenceClassification, BigBirdForSequenceClassification, ConvBertForSequenceClassification, DistilBertForSequenceClassification, ElectraForSequenceClassification, FunnelForSequenceClassification, LongformerForSequenceClassification, MobileBertForSequenceClassification, ReformerForSequenceClassification, RobertaForSequenceClassification, SqueezeBertForSequenceClassification, XLMForSequenceClassification and XLNetForSequenceClassification.
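
A typical classification setup stores the label mappings in the configuration; a sketch with hypothetical sentiment labels:

from transformers import BertConfig

config = BertConfig.from_pretrained(
    'bert-base-uncased',
    num_labels=3,
    id2label={0: 'negative', 1: 'neutral', 2: 'positive'},
    label2id={'negative': 0, 'neutral': 1, 'positive': 2},
)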

Parameters linked to the tokenizer

  • tokenizer_class (str, optional) – The name of the associated tokenizer class to use (if none is set, will use the tokenizer associated to the model by default).

  • prefix (str, optional) – A specific prompt that should be added at the beginning of each text before calling the model.

  • bos_token_id (int, optional) – The id of the beginning-of-stream token.

  • pad_token_id (int, optional) – The id of the padding token.

  • eos_token_id (int, optional) – The id of the end-of-stream token.

  • decoder_start_token_id (int, optional) – If an encoder-decoder model starts decoding with a different token than bos, the id of that token.

  • sep_token_id (int, optional) – The id of the separation token.
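
For instance, GPT-2 defines no padding token; a common workaround for batched generation is to reuse the end-of-stream id as the padding id (a sketch, not the only option):

from transformers import GPT2Config

config = GPT2Config.from_pretrained('gpt2')
config.pad_token_id = config.eos_token_id  # reuse eos as pad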

PyTorch specific parameters

  • torchscript (bool, optional, defaults to False) – Whether or not the model should be used with TorchScript.

  • tie_word_embeddings (bool, optional, defaults to True) – Whether the model's input and output word embeddings should be tied. Note that this is only relevant if the model has an output word embedding layer.

  • torch_dtype (str, optional) – The dtype of the weights. This attribute can be used to initialize the model to a non-default dtype (which is normally float32) and thus allow for optimal storage allocation. For example, if the saved model is float16, ideally we want to load it back using the minimal amount of memory needed to load float16 weights. Since the config object is stored in plain text, this attribute contains just the floating type string without the torch. prefix. For example, for torch.float16, torch_dtype is the "float16" string.

    This attribute is currently not used at model loading time, but this may change in future versions. We can already start preparing for the future by saving the dtype with save_pretrained.

TensorFlow specific parameters

  • use_bfloat16 (bool, optional, defaults to False) – Whether or not the model should use BFloat16 scalars (only used by some TensorFlow models).

dict_torch_dtype_to_str(d: Dict[str, Any]) → None

Checks whether the passed dictionary has a torch_dtype key and, if it's not None, converts torch.dtype to a string of just the type. For example, torch.float32 gets converted into the "float32" string, which can then be stored in JSON format.
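
A minimal sketch of the in-place conversion (assuming PyTorch is installed):

import torch

from transformers import BertConfig

config = BertConfig()
d = {'torch_dtype': torch.float16}
config.dict_torch_dtype_to_str(d)  # modifies d in place
print(d['torch_dtype'])            # "float16"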

classmethod from_dict(config_dict: Dict[str, Any], **kwargs) → transformers.configuration_utils.PretrainedConfig

Instantiates a PretrainedConfig from a Python dictionary of parameters.

Parameters
  • config_dict (Dict[str, Any]) – Dictionary that will be used to instantiate the configuration object. Such a dictionary can be retrieved from a pretrained checkpoint by leveraging the get_config_dict() method.

  • kwargs (Dict[str, Any]) – Additional parameters from which to initialize the configuration object.

Returns

The configuration object instantiated from those parameters.

Return type

PretrainedConfig
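
For example:

from transformers import BertConfig

# Attributes not present in the dictionary keep their default values.
config = BertConfig.from_dict({'hidden_size': 1024, 'num_attention_heads': 16})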

classmethod from_json_file(json_file: Union[str, os.PathLike]) → transformers.configuration_utils.PretrainedConfig

Instantiates a PretrainedConfig from the path to a JSON file of parameters.

Parameters

json_file (str or os.PathLike) – Path to the JSON file containing the parameters.

Returns

The configuration object instantiated from that JSON file.

Return type

PretrainedConfig
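
A round-trip sketch with to_json_file():

from transformers import BertConfig

config = BertConfig()
config.to_json_file('config.json')
config = BertConfig.from_json_file('config.json')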

classmethod from_pretrained(pretrained_model_name_or_path: Union[str, os.PathLike], **kwargs) → transformers.configuration_utils.PretrainedConfig

Instantiate a PretrainedConfig (or a derived class) from a pretrained model configuration.

Parameters
  • pretrained_model_name_or_path (str or os.PathLike) –

    This can be either:

    • a string, the model id of a pretrained model configuration hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.

    • a path to a directory containing a configuration file saved using the save_pretrained() method, e.g., ./my_model_directory/.

    • a path or url to a saved configuration JSON file, e.g., ./my_model_directory/configuration.json.

  • cache_dir (str or os.PathLike, optional) – Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

  • force_download (bool, optional, defaults to False) – Whether or not to force (re-)downloading the configuration files and overriding the cached versions if they exist.

  • resume_download (bool, optional, defaults to False) – Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.

  • proxies (Dict[str, str], optional) – A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.

  • use_auth_token (str or bool, optional) – The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running transformers-cli login (stored in ~/.huggingface).

  • revision (str, optional, defaults to "main") – The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.

  • return_unused_kwargs (bool, optional, defaults to False) –

    If False, then this function returns just the final configuration object.

    If True, then this function returns a Tuple(config, unused_kwargs) where unused_kwargs is a dictionary consisting of the key/value pairs whose keys are not configuration attributes: i.e., the part of kwargs which has not been used to update config and is otherwise ignored.

  • kwargs (Dict[str, Any], optional) – The values in kwargs of any keys which are configuration attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are not configuration attributes is controlled by the return_unused_kwargs keyword parameter.

Note

Passing use_auth_token=True is required when you want to use a private model.

Returns

The configuration object instantiated from this pretrained model.

Return type

PretrainedConfig

Examples:

# We can't instantiate the base class `PretrainedConfig` directly, so the examples below use a
# derived class: BertConfig
config = BertConfig.from_pretrained('bert-base-uncased')    # Download configuration from huggingface.co and cache.
config = BertConfig.from_pretrained('./test/saved_model/')  # E.g. config (or model) was saved using `save_pretrained('./test/saved_model/')`
config = BertConfig.from_pretrained('./test/saved_model/my_configuration.json')
config = BertConfig.from_pretrained('bert-base-uncased', output_attentions=True, foo=False)
assert config.output_attentions == True
config, unused_kwargs = BertConfig.from_pretrained('bert-base-uncased', output_attentions=True,
                                                   foo=False, return_unused_kwargs=True)
assert config.output_attentions == True
assert unused_kwargs == {'foo': False}

classmethod get_config_dict(pretrained_model_name_or_path: Union[str, os.PathLike], **kwargs) → Tuple[Dict[str, Any], Dict[str, Any]]

From a pretrained_model_name_or_path, resolve to a dictionary of parameters, to be used for instantiating a PretrainedConfig using from_dict.

Parameters

pretrained_model_name_or_path (str or os.PathLike) – The identifier of the pre-trained checkpoint from which we want the dictionary of parameters.

Returns

The dictionary(ies) that will be used to instantiate the configuration object.

Return type

Tuple[Dict, Dict]
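
The first dictionary of the returned tuple is the one to pass to from_dict():

from transformers import BertConfig

config_dict, unused_kwargs = BertConfig.get_config_dict('bert-base-uncased')
config = BertConfig.from_dict(config_dict)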

property num_labels

The number of labels for classification models.

Type

int
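
Setting this property also regenerates default id2label and label2id mappings when none were provided (a sketch; the auto-generated names have the form LABEL_0, LABEL_1, ...):

from transformers import BertConfig

config = BertConfig(num_labels=3)
print(config.num_labels)  # 3
print(config.id2label)    # {0: 'LABEL_0', 1: 'LABEL_1', 2: 'LABEL_2'}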

push_to_hub(repo_path_or_name: Optional[str] = None, repo_url: Optional[str] = None, use_temp_dir: bool = False, commit_message: Optional[str] = None, organization: Optional[str] = None, private: Optional[bool] = None, use_auth_token: Optional[Union[bool, str]] = None) → str

Upload the configuration file to the 🤗 Model Hub while synchronizing a local clone of the repo in repo_path_or_name.

Parameters
  • repo_path_or_name (str, optional) – Can either be a repository name for your config in the Hub or a path to a local folder (in which case the repository will have the name of that local folder). If not specified, will default to the name given by repo_url and a local directory with that name will be created.

  • repo_url (str, optional) – Specify this in case you want to push to an existing repository in the hub. If unspecified, a new repository will be created in your namespace (unless you specify an organization) with repo_name.

  • use_temp_dir (bool, optional, defaults to False) – Whether or not to clone the distant repo in a temporary directory or in repo_path_or_name inside the current working directory. This will slow things down if you are making changes in an existing repo since you will need to clone the repo before every push.

  • commit_message (str, optional) – Message to commit while pushing. Will default to "add config".

  • organization (str, optional) – Organization in which you want to push your config (you must be a member of this organization).

  • private (bool, optional) – Whether or not the repository created should be private (requires a paying subscription).

  • use_auth_token (bool or str, optional) – The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running transformers-cli login (stored in ~/.huggingface). Will default to True if repo_url is not specified.

Returns

The url of the commit of your config in the given repository.

Return type

str

Examples:

from transformers import AutoConfig

config = AutoConfig.from_pretrained("bert-base-cased")

# Push the config to your namespace with the name "my-finetuned-bert" and have a local clone in the
# `my-finetuned-bert` folder.
config.push_to_hub("my-finetuned-bert")

# Push the config to your namespace with the name "my-finetuned-bert" with no local clone.
config.push_to_hub("my-finetuned-bert", use_temp_dir=True)

# Push the config to an organization with the name "my-finetuned-bert" and have a local clone in the
# `my-finetuned-bert` folder.
config.push_to_hub("my-finetuned-bert", organization="huggingface")

# Make a change to an existing repo that has been cloned locally in `my-finetuned-bert`.
config.push_to_hub("my-finetuned-bert", repo_url="https://huggingface.co/sgugger/my-finetuned-bert")

save_pretrained(save_directory: Union[str, os.PathLike], push_to_hub: bool = False, **kwargs)

Save a configuration object to the directory save_directory, so that it can be re-loaded using the from_pretrained() class method.

Parameters
  • save_directory (str or os.PathLike) – Directory where the configuration JSON file will be saved (will be created if it does not exist).

  • push_to_hub (bool, optional, defaults to False) –

    Whether or not to push your model to the Hugging Face model hub after saving it.

    Warning

    Using push_to_hub=True will synchronize the repository you are pushing to with save_directory, which requires save_directory to be a local clone of the repo you are pushing to if it’s an existing folder. Pass along temp_dir=True to use a temporary directory instead.

  • kwargs – Additional keyword arguments passed along to the push_to_hub() method.
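
A typical save/reload round trip:

from transformers import BertConfig

config = BertConfig.from_pretrained('bert-base-uncased')
config.save_pretrained('./my_model_directory/')  # writes config.json into the directory
config = BertConfig.from_pretrained('./my_model_directory/')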

to_dict() → Dict[str, Any]

Serializes this instance to a Python dictionary.

Returns

Dictionary of all the attributes that make up this configuration instance.

Return type

Dict[str, Any]

to_diff_dict() → Dict[str, Any]

Removes all attributes from config which correspond to the default config attributes for better readability and serializes to a Python dictionary.

Returns

Dictionary of all the attributes that make up this configuration instance.

Return type

Dict[str, Any]
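
A sketch contrasting to_dict() with to_diff_dict():

from transformers import BertConfig

config = BertConfig(hidden_size=1024)
full = config.to_dict()       # every attribute, including defaults
diff = config.to_diff_dict()  # only attributes that differ from the defaults
print(len(diff) < len(full))  # True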

to_json_file(json_file_path: Union[str, os.PathLike], use_diff: bool = True)

Save this instance to a JSON file.

Parameters
  • json_file_path (str or os.PathLike) – Path to the JSON file in which this configuration instance's parameters will be saved.

  • use_diff (bool, optional, defaults to True) – If set to True, only the difference between the config instance and the default PretrainedConfig() is serialized to JSON file.

to_json_string(use_diff: bool = True) → str

Serializes this instance to a JSON string.

Parameters

use_diff (bool, optional, defaults to True) – If set to True, only the difference between the config instance and the default PretrainedConfig() is serialized to JSON string.

Returns

String containing all the attributes that make up this configuration instance in JSON format.

Return type

str
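
For example:

from transformers import BertConfig

config = BertConfig(hidden_size=1024)
print(config.to_json_string())                # only non-default values (use_diff=True)
print(config.to_json_string(use_diff=False))  # the full configuration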

update(config_dict: Dict[str, Any])

Updates attributes of this class with attributes from config_dict.

Parameters

config_dict (Dict[str, Any]) – Dictionary of attributes that should be updated for this class.
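
For example:

from transformers import BertConfig

config = BertConfig()
config.update({'output_attentions': True, 'output_hidden_states': True})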

update_from_string(update_str: str)

Updates attributes of this class with attributes from update_str.

The expected format is ints, floats and strings as is, and for booleans use true or false. For example: "n_embd=10,resid_pdrop=0.2,scale_attn_weights=false,summary_type=cls_index"

The keys to change have to already exist in the config object.

Parameters

update_str (str) – String with attributes that should be updated for this class.
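
Applying the example string above to a GPT-2 configuration (whose attributes include n_embd and resid_pdrop):

from transformers import GPT2Config

config = GPT2Config()
config.update_from_string('n_embd=10,resid_pdrop=0.2,scale_attn_weights=false,summary_type=cls_index')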

property use_return_dict

Whether or not to return a ModelOutput instead of tuples.

Type

bool