In many cases, the architecture you want to use can be guessed from the name or the path of the pretrained model you
are supplying to the from_pretrained()
method. AutoClasses are here to do this job for you so that you
automatically retrieve the relevant model given the name/path to the pretrained weights/config/vocabulary.
Instantiating one of AutoConfig, AutoModel, and AutoTokenizer will directly create a class of the relevant architecture. For instance,
model = AutoModel.from_pretrained("bert-base-cased")
will create a model that is an instance of BertModel.
There is one class of AutoModel
for each task, and for each backend (PyTorch, TensorFlow, or Flax).
Each of the auto classes has a method to be extended with your custom classes. For instance, if you have defined a custom model class NewModel, make sure you have a NewModelConfig; then you can add those to the auto classes like this:
from transformers import AutoConfig, AutoModel
AutoConfig.register("new-model", NewModelConfig)
AutoModel.register(NewModelConfig, NewModel)
You will then be able to use the auto classes like you would usually do!
If your NewModelConfig is a subclass of PretrainedConfig, make sure its model_type attribute is set to the same key you use when registering the config (here "new-model").
Likewise, if your NewModel is a subclass of PreTrainedModel, make sure its config_class attribute is set to the same class you use when registering the model (here NewModelConfig).
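Concretely, a minimal sketch of what those two attributes look like (NewModel and NewModelConfig are the hypothetical classes from the example above, with an invented hidden_size field purely for illustration):

import torch.nn as nn
from transformers import PretrainedConfig, PreTrainedModel

class NewModelConfig(PretrainedConfig):
    model_type = "new-model"  # must match the key passed to AutoConfig.register

    def __init__(self, hidden_size=64, **kwargs):
        self.hidden_size = hidden_size
        super().__init__(**kwargs)

class NewModel(PreTrainedModel):
    config_class = NewModelConfig  # must match the config passed to AutoModel.register

    def __init__(self, config):
        super().__init__(config)
        # A toy layer, just so the model has parameters to save and load.
        self.linear = nn.Linear(config.hidden_size, config.hidden_size)

    def forward(self, x):
        return self.linear(x)

Once both classes are registered, saving such a model with save_pretrained() and pointing AutoModel.from_pretrained() at the saved directory should resolve back to NewModel through the registered config.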
This is a generic configuration class that will be instantiated as one of the configuration classes of the library when created with the from_pretrained() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( pretrained_model_name_or_path **kwargs )
Parameters

pretrained_model_name_or_path (str or os.PathLike) —
Can be either:

- A string, the model id of a pretrained model configuration hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing a configuration file saved using the save_pretrained() method, e.g., ./my_model_directory/.
- A path or url to a saved configuration JSON file, e.g., ./my_model_directory/configuration.json.

cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files and override the cached versions if they exist.

resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.

proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.

revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.

return_unused_kwargs (bool, optional, defaults to False) —
If False, then this function returns just the final configuration object. If True, then this function returns a Tuple(config, unused_kwargs) where unused_kwargs is a dictionary consisting of the key/value pairs whose keys are not configuration attributes: i.e., the part of kwargs which has not been used to update config and is otherwise ignored.

trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

kwargs (additional keyword arguments, optional) —
The values in kwargs of any keys which are configuration attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are not configuration attributes is controlled by the return_unused_kwargs keyword parameter.
Instantiate one of the configuration classes of the library from a pretrained model configuration.
The configuration class to instantiate is selected based on the model_type
property of the config object that
is loaded, or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path
.
Examples:
>>> from transformers import AutoConfig
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("bert-base-uncased")
>>> # Download configuration from huggingface.co (user-uploaded) and cache.
>>> config = AutoConfig.from_pretrained("dbmdz/bert-base-german-cased")
>>> # If configuration file is in a directory (e.g., was saved using *save_pretrained('./test/saved_model/')*).
>>> config = AutoConfig.from_pretrained("./test/bert_saved_model/")
>>> # Load a specific configuration file.
>>> config = AutoConfig.from_pretrained("./test/bert_saved_model/my_configuration.json")
>>> # Change some config attributes when loading a pretrained config.
>>> config = AutoConfig.from_pretrained("bert-base-uncased", output_attentions=True, foo=False)
>>> config.output_attentions
True
>>> config, unused_kwargs = AutoConfig.from_pretrained(
... "bert-base-uncased", output_attentions=True, foo=False, return_unused_kwargs=True
... )
>>> config.output_attentions
True
>>> unused_kwargs
{'foo': False}
( model_type config )
Parameters
model_type (str) — The model type like "bert" or "gpt".
config (PretrainedConfig) — The config to register.
Register a new configuration for this class.
This is a generic tokenizer class that will be instantiated as one of the tokenizer classes of the library when created with the AutoTokenizer.from_pretrained() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( pretrained_model_name_or_path *inputs **kwargs )
Parameters

pretrained_model_name_or_path (str or os.PathLike) —
Can be either:

- A string, the model id of a predefined tokenizer hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing vocabulary files required by the tokenizer, e.g., ./my_model_directory/.
- A path or url to a single saved vocabulary file, if and only if the tokenizer only requires a single vocabulary file (like Bert or XLNet), e.g., ./my_model_directory/vocab.txt. (Not applicable to all derived classes)

inputs (additional positional arguments, optional) —
Will be passed along to the Tokenizer __init__() method.

cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files and override the cached versions if they exist.

resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.

proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.

revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.

subfolder (str, optional) —
In case the relevant files are located inside a subfolder of the model repo on huggingface.co (e.g. for facebook/rag-token-base), specify it here.

use_fast (bool, optional, defaults to True) —
Whether or not to try to load the fast version of the tokenizer.

tokenizer_type (str, optional) —
Tokenizer type to be loaded.

trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

kwargs (additional keyword arguments, optional) —
Will be passed to the Tokenizer __init__() method. Can be used to set special tokens like bos_token, eos_token, unk_token, sep_token, pad_token, cls_token, mask_token, additional_special_tokens. See parameters in the __init__() for more details.
Instantiate one of the tokenizer classes of the library from a pretrained model vocabulary.
The tokenizer class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
.
Examples:
>>> from transformers import AutoTokenizer
>>> # Download vocabulary from huggingface.co and cache.
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> # Download vocabulary from huggingface.co (user-uploaded) and cache.
>>> tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-cased")
>>> # If vocabulary files are in a directory (e.g. tokenizer was saved using *save_pretrained('./test/saved_model/')*)
>>> tokenizer = AutoTokenizer.from_pretrained("./test/bert_saved_model/")
>>> # Download vocabulary from huggingface.co and define model-specific arguments
>>> tokenizer = AutoTokenizer.from_pretrained("roberta-base", add_prefix_space=True)
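The use_fast flag described above works the same way; for instance, to fall back to the slow (Python) tokenizer implementation instead of the fast one:

>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=False)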
( config_class slow_tokenizer_class = None fast_tokenizer_class = None )
Parameters
config_class (PretrainedConfig) — The configuration corresponding to the model to register.
slow_tokenizer_class (PretrainedTokenizer, optional) — The slow tokenizer to register.
fast_tokenizer_class (PretrainedTokenizerFast, optional) — The fast tokenizer to register.
Register a new tokenizer in this mapping.
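For example, registering a tokenizer pair for the "new-model" config from the example at the top of this page could look like this (NewModelTokenizer and NewModelTokenizerFast are hypothetical classes, assumed to subclass the slow and fast tokenizer base classes):

from transformers import AutoConfig, AutoTokenizer

AutoConfig.register("new-model", NewModelConfig)
AutoTokenizer.register(
    NewModelConfig, slow_tokenizer_class=NewModelTokenizer, fast_tokenizer_class=NewModelTokenizerFast
)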
This is a generic feature extractor class that will be instantiated as one of the feature extractor classes of the library when created with the AutoFeatureExtractor.from_pretrained() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( pretrained_model_name_or_path **kwargs )
Parameters

pretrained_model_name_or_path (str or os.PathLike) —
This can be either:

- a string, the model id of a pretrained feature extractor hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- a path to a directory containing a feature extractor file saved using the save_pretrained() method, e.g., ./my_model_directory/.
- a path or url to a saved feature extractor JSON file, e.g., ./my_model_directory/preprocessor_config.json.

cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model feature extractor should be cached if the standard cache should not be used.

force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the feature extractor files and override the cached versions if they exist.

resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Attempts to resume the download if such a file exists.

proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.

use_auth_token (str or bool, optional) —
The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).

revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.

return_unused_kwargs (bool, optional, defaults to False) —
If False, then this function returns just the final feature extractor object. If True, then this function returns a Tuple(feature_extractor, unused_kwargs) where unused_kwargs is a dictionary consisting of the key/value pairs whose keys are not feature extractor attributes: i.e., the part of kwargs which has not been used to update feature_extractor and is otherwise ignored.

trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

kwargs (Dict[str, Any], optional) —
The values in kwargs of any keys which are feature extractor attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are not feature extractor attributes is controlled by the return_unused_kwargs keyword parameter.
Instantiate one of the feature extractor classes of the library from a pretrained model vocabulary.
The feature extractor class to instantiate is selected based on the model_type
property of the config object
(either passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s
missing, by falling back to using pattern matching on pretrained_model_name_or_path
.
Passing use_auth_token=True
is required when you want to use a private model.
Examples:
>>> from transformers import AutoFeatureExtractor
>>> # Download feature extractor from huggingface.co and cache.
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base-960h")
>>> # If feature extractor files are in a directory (e.g. feature extractor was saved using *save_pretrained('./test/saved_model/')*)
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("./test/saved_model/")
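As noted above, loading from a private repository requires authentication first; a minimal sketch (the repository name here is a placeholder):

>>> # After running huggingface-cli login once on this machine
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("my-org/my-private-model", use_auth_token=True)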
( config_class feature_extractor_class )
Parameters
config_class (PretrainedConfig) — The configuration corresponding to the model to register.
feature_extractor_class (FeatureExtractorMixin) — The feature extractor to register.
Register a new feature extractor for this class.
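Mirroring the pattern shown for tokenizers, registering a hypothetical NewModelFeatureExtractor for the "new-model" config could look like:

from transformers import AutoConfig, AutoFeatureExtractor

AutoConfig.register("new-model", NewModelConfig)
AutoFeatureExtractor.register(NewModelConfig, NewModelFeatureExtractor)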
This is a generic processor class that will be instantiated as one of the processor classes of the library when created with the AutoProcessor.from_pretrained() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( pretrained_model_name_or_path **kwargs )
Parameters

pretrained_model_name_or_path (str or os.PathLike) —
This can be either:

- a string, the model id of a pretrained processor hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- a path to a directory containing processor files saved using the save_pretrained() method, e.g., ./my_model_directory/.

cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model processor should be cached if the standard cache should not be used.

force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the processor files and override the cached versions if they exist.

resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Attempts to resume the download if such a file exists.

proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.

use_auth_token (str or bool, optional) —
The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running huggingface-cli login (stored in ~/.huggingface).

revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.

return_unused_kwargs (bool, optional, defaults to False) —
If False, then this function returns just the final processor object. If True, then this function returns a Tuple(processor, unused_kwargs) where unused_kwargs is a dictionary consisting of the key/value pairs whose keys are not processor attributes: i.e., the part of kwargs which has not been used to update processor and is otherwise ignored.

trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

kwargs (Dict[str, Any], optional) —
The values in kwargs of any keys which are processor attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are not processor attributes is controlled by the return_unused_kwargs keyword parameter.
Instantiate one of the processor classes of the library from a pretrained model vocabulary.
The processor class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible).
Passing use_auth_token=True
is required when you want to use a private model.
Examples:
>>> from transformers import AutoProcessor
>>> # Download processor from huggingface.co and cache.
>>> processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
>>> # If processor files are in a directory (e.g. processor was saved using *save_pretrained('./test/saved_model/')*)
>>> processor = AutoProcessor.from_pretrained("./test/saved_model/")
( config_class processor_class )
Parameters
config_class (PretrainedConfig) — The configuration corresponding to the model to register.
processor_class (FeatureExtractorMixin) — The processor to register.
Register a new processor for this class.
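Likewise, registering a hypothetical NewModelProcessor for the "new-model" config could look like:

from transformers import AutoConfig, AutoProcessor

AutoConfig.register("new-model", NewModelConfig)
AutoProcessor.register(NewModelConfig, NewModelProcessor)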
This is a generic model class that will be instantiated as one of the base model classes of the library when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class.
Instantiates one of the base model classes of the library from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
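For example, building an untrained BertModel from its configuration (weights are randomly initialized, not downloaded):

>>> from transformers import AutoConfig, AutoModel
>>> config = AutoConfig.from_pretrained("bert-base-cased")
>>> model = AutoModel.from_config(config)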
( *model_args **kwargs )
Parameters

pretrained_model_name_or_path (str or os.PathLike) —
Can be either:

- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.

model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.

config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.

state_dict (Dict[str, torch.Tensor], optional) —
A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.

cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

from_tf (bool, optional, defaults to False) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).

force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.

proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.

output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).

revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.

trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

- If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the base model classes of the library from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
.
The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModel
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModel.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModel.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModel.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a pretraining head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class.
Instantiates one of the model classes of the library (with a pretraining head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
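For example, building an untrained BertForPreTraining from its configuration (weights are randomly initialized):

>>> from transformers import AutoConfig, AutoModelForPreTraining
>>> config = AutoConfig.from_pretrained("bert-base-cased")
>>> model = AutoModelForPreTraining.from_config(config)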
( *model_args **kwargs )
Parameters

pretrained_model_name_or_path (str or os.PathLike) —
Can be either:

- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.

model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.

config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.

state_dict (Dict[str, torch.Tensor], optional) —
A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.

cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

from_tf (bool, optional, defaults to False) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).

force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.

proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.

output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).

revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.

trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

- If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a pretraining head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
.
The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForPreTraining
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForPreTraining.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForPreTraining.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForPreTraining.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a causal language modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class.
Instantiates one of the model classes of the library (with a causal language modeling head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
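For example, building an untrained causal language model from a BERT configuration (this resolves to the BERT decoder class; weights are randomly initialized):

>>> from transformers import AutoConfig, AutoModelForCausalLM
>>> config = AutoConfig.from_pretrained("bert-base-cased")
>>> model = AutoModelForCausalLM.from_config(config)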
( *model_args **kwargs )
Parameters

pretrained_model_name_or_path (str or os.PathLike) —
Can be either:

- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.

model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.

config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.

state_dict (Dict[str, torch.Tensor], optional) —
A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.

cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

from_tf (bool, optional, defaults to False) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).

force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.

proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.

output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).

revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.

trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

- If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a causal language modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
.
The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForCausalLM
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForCausalLM.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForCausalLM.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForCausalLM.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a depth estimation head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class.
Instantiates one of the model classes of the library (with a depth estimation head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
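For example (Intel/dpt-large is our substitution here, since a depth estimation head requires a model like DPT rather than BERT):

>>> from transformers import AutoConfig, AutoModelForDepthEstimation
>>> config = AutoConfig.from_pretrained("Intel/dpt-large")
>>> model = AutoModelForDepthEstimation.from_config(config)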
( *model_args **kwargs )
Parameters

pretrained_model_name_or_path (str or os.PathLike) —
Can be either:

- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.

model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.

config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.

state_dict (Dict[str, torch.Tensor], optional) —
A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.

cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

from_tf (bool, optional, defaults to False) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).

force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.

proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.

output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).

revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.

trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

- If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a depth estimation head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
.
The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoModelForDepthEstimation
>>> # Download model and configuration from huggingface.co and cache.
>>> # (A depth estimation checkpoint such as Intel/dpt-large is used here; a plain BERT checkpoint has no depth estimation head.)
>>> model = AutoModelForDepthEstimation.from_pretrained("Intel/dpt-large")
>>> # Update configuration during loading
>>> model = AutoModelForDepthEstimation.from_pretrained("Intel/dpt-large", output_attentions=True)
>>> model.config.output_attentions
True
This is a generic model class that will be instantiated as one of the model classes of the library (with a masked language modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class.
Instantiates one of the model classes of the library (with a masked language modeling head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
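For example, building an untrained BertForMaskedLM from its configuration (weights are randomly initialized):

>>> from transformers import AutoConfig, AutoModelForMaskedLM
>>> config = AutoConfig.from_pretrained("bert-base-cased")
>>> model = AutoModelForMaskedLM.from_config(config)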
( *model_args **kwargs )
Parameters

pretrained_model_name_or_path (str or os.PathLike) —
Can be either:

- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.

model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.

config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.

state_dict (Dict[str, torch.Tensor], optional) —
A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.

cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

from_tf (bool, optional, defaults to False) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).

force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.

proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.

output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).

revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.

trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

- If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a masked language modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
.
The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForMaskedLM
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForMaskedLM.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForMaskedLM.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForMaskedLM.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence language modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class.
Instantiates one of the model classes of the library (with a sequence-to-sequence language modeling head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
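For example, building an untrained T5 sequence-to-sequence model from its configuration (weights are randomly initialized):

>>> from transformers import AutoConfig, AutoModelForSeq2SeqLM
>>> config = AutoConfig.from_pretrained("t5-base")
>>> model = AutoModelForSeq2SeqLM.from_config(config)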
( *model_args **kwargs )
Parameters

pretrained_model_name_or_path (str or os.PathLike) —
Can be either:

- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.

model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.

config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.

state_dict (Dict[str, torch.Tensor], optional) —
A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.

cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

from_tf (bool, optional, defaults to False) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).

force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.

proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.

output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).

revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.

trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

- If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a sequence-to-sequence language modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
.
The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForSeq2SeqLM
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
>>> # Update configuration during loading
>>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/t5_tf_model_config.json")
>>> model = AutoModelForSeq2SeqLM.from_pretrained(
... "./tf_model/t5_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class.
Instantiates one of the model classes of the library (with a sequence classification head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
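For example, building an untrained BertForSequenceClassification from its configuration (weights are randomly initialized):

>>> from transformers import AutoConfig, AutoModelForSequenceClassification
>>> config = AutoConfig.from_pretrained("bert-base-cased")
>>> model = AutoModelForSequenceClassification.from_config(config)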
( *model_args **kwargs )
Parameters

pretrained_model_name_or_path (str or os.PathLike) —
Can be either:

- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.

model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.

config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.

state_dict (Dict[str, torch.Tensor], optional) —
A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.

cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

from_tf (bool, optional, defaults to False) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).

force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.

proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.

output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).

revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.

trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

- If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a sequence classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
.
The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForSequenceClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForSequenceClassification.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a multiple choice head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class.
Instantiates one of the model classes of the library (with a multiple choice head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
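For example, building an untrained BertForMultipleChoice from its configuration (weights are randomly initialized):

>>> from transformers import AutoConfig, AutoModelForMultipleChoice
>>> config = AutoConfig.from_pretrained("bert-base-cased")
>>> model = AutoModelForMultipleChoice.from_config(config)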
( *model_args **kwargs )
Parameters

pretrained_model_name_or_path (str or os.PathLike) —
Can be either:

- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.

model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.

config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:

- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.

state_dict (Dict[str, torch.Tensor], optional) —
A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.

cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.

from_tf (bool, optional, defaults to False) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).

force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.

resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.

proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.

output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.

local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).

revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.

trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.

kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:

- If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.

Instantiate one of the model classes of the library (with a multiple choice head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
.
The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForMultipleChoice
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForMultipleChoice.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForMultipleChoice.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForMultipleChoice.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a next sentence prediction head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class.
Instantiates one of the model classes of the library (with a next sentence prediction head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
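For instance, a model with a freshly initialized next sentence prediction head can be built from just a configuration (a minimal sketch; only the configuration is downloaded, no weights):
>>> from transformers import AutoConfig, AutoModelForNextSentencePrediction
>>> # Download configuration from huggingface.co and cache, then build an untrained model from it.
>>> config = AutoConfig.from_pretrained("bert-base-cased")
>>> model = AutoModelForNextSentencePrediction.from_config(config)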
( pretrained_model_name_or_path *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) — Will be passed along to the underlying model’s __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
The model is a model provided by the library (loaded with the model id string of a pretrained model).
The model was saved using save_pretrained() and is reloaded by supplying the save directory.
The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with a next sentence prediction head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path.
The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForNextSentencePrediction
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForNextSentencePrediction.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForNextSentencePrediction.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForNextSentencePrediction.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a token classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class.
Instantiates one of the model classes of the library (with a token classification head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
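For instance, a model with a freshly initialized token classification head can be built from just a configuration (a minimal sketch; only the configuration is downloaded, no weights):
>>> from transformers import AutoConfig, AutoModelForTokenClassification
>>> # Download configuration from huggingface.co and cache, then build an untrained model from it.
>>> config = AutoConfig.from_pretrained("bert-base-cased")
>>> model = AutoModelForTokenClassification.from_config(config)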
( pretrained_model_name_or_path *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) — Will be passed along to the underlying model’s __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
The model is a model provided by the library (loaded with the model id string of a pretrained model).
The model was saved using save_pretrained() and is reloaded by supplying the save directory.
The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with a token classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path.
The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForTokenClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForTokenClassification.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForTokenClassification.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForTokenClassification.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a question answering head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class.
Instantiates one of the model classes of the library (with a question answering head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
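For instance, a model with a freshly initialized question answering head can be built from just a configuration (a minimal sketch; only the configuration is downloaded, no weights):
>>> from transformers import AutoConfig, AutoModelForQuestionAnswering
>>> # Download configuration from huggingface.co and cache, then build an untrained model from it.
>>> config = AutoConfig.from_pretrained("bert-base-cased")
>>> model = AutoModelForQuestionAnswering.from_config(config)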
( pretrained_model_name_or_path *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) — Will be passed along to the underlying model’s __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
The model is a model provided by the library (loaded with the model id string of a pretrained model).
The model was saved using save_pretrained() and is reloaded by supplying the save directory.
The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with a question answering head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path.
The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForQuestionAnswering
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForQuestionAnswering.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForQuestionAnswering.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForQuestionAnswering.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a table question answering head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class.
Instantiates one of the model classes of the library (with a table question answering head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
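For instance, a model with a freshly initialized table question answering head can be built from just a configuration (a minimal sketch; only the configuration is downloaded, no weights):
>>> from transformers import AutoConfig, AutoModelForTableQuestionAnswering
>>> # Download configuration from huggingface.co and cache, then build an untrained model from it.
>>> config = AutoConfig.from_pretrained("google/tapas-base-finetuned-wtq")
>>> model = AutoModelForTableQuestionAnswering.from_config(config)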
( pretrained_model_name_or_path *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) — Will be passed along to the underlying model’s __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
The model is a model provided by the library (loaded with the model id string of a pretrained model).
The model was saved using save_pretrained() and is reloaded by supplying the save directory.
The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with a table question answering head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path.
The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForTableQuestionAnswering
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForTableQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq")
>>> # Update configuration during loading
>>> model = AutoModelForTableQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/tapas_tf_model_config.json")
>>> model = AutoModelForTableQuestionAnswering.from_pretrained(
... "./tf_model/tapas_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a document question answering head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class.
Instantiates one of the model classes of the library (with a document question answering head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
Examples:
>>> from transformers import AutoConfig, AutoModelForDocumentQuestionAnswering
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("impira/layoutlm-document-qa", revision="52e01b3")
>>> model = AutoModelForDocumentQuestionAnswering.from_config(config)
( pretrained_model_name_or_path *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) — Will be passed along to the underlying model’s __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
The model is a model provided by the library (loaded with the model id string of a pretrained model).
The model was saved using save_pretrained() and is reloaded by supplying the save directory.
The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with a document question answering head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path.
The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForDocumentQuestionAnswering
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForDocumentQuestionAnswering.from_pretrained("impira/layoutlm-document-qa", revision="52e01b3")
>>> # Update configuration during loading
>>> model = AutoModelForDocumentQuestionAnswering.from_pretrained("impira/layoutlm-document-qa", revision="52e01b3", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/layoutlm_tf_model_config.json")
>>> model = AutoModelForDocumentQuestionAnswering.from_pretrained(
... "./tf_model/layoutlm_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with an image classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class.
Instantiates one of the model classes of the library (with an image classification head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
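For instance, a model with a freshly initialized image classification head can be built from just a configuration (a minimal sketch; google/vit-base-patch16-224 is assumed here purely as an illustrative checkpoint whose configuration maps to an image classification architecture):
>>> from transformers import AutoConfig, AutoModelForImageClassification
>>> # Download configuration from huggingface.co and cache, then build an untrained model from it.
>>> config = AutoConfig.from_pretrained("google/vit-base-patch16-224")
>>> model = AutoModelForImageClassification.from_config(config)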
( pretrained_model_name_or_path *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) — Will be passed along to the underlying model’s __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
The model is a model provided by the library (loaded with the model id string of a pretrained model).
The model was saved using save_pretrained() and is reloaded by supplying the save directory.
The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with an image classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path.
The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForImageClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224")
>>> # Update configuration during loading
>>> model = AutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/vit_tf_model_config.json")
>>> model = AutoModelForImageClassification.from_pretrained(
...     "./tf_model/vit_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a video classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class.
Instantiates one of the model classes of the library (with a video classification head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
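For instance, a model with a freshly initialized video classification head can be built from just a configuration (a minimal sketch; MCG-NJU/videomae-base is assumed here purely as an illustrative checkpoint whose configuration maps to a video classification architecture):
>>> from transformers import AutoConfig, AutoModelForVideoClassification
>>> # Download configuration from huggingface.co and cache, then build an untrained model from it.
>>> config = AutoConfig.from_pretrained("MCG-NJU/videomae-base")
>>> model = AutoModelForVideoClassification.from_config(config)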
( pretrained_model_name_or_path *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) — Will be passed along to the underlying model’s __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
The model is a model provided by the library (loaded with the model id string of a pretrained model).
The model was saved using save_pretrained() and is reloaded by supplying the save directory.
The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with a video classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path.
The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForVideoClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForVideoClassification.from_pretrained("MCG-NJU/videomae-base")
>>> # Update configuration during loading
>>> model = AutoModelForVideoClassification.from_pretrained("MCG-NJU/videomae-base", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/videomae_tf_model_config.json")
>>> model = AutoModelForVideoClassification.from_pretrained(
...     "./tf_model/videomae_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a vision-to-text modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class.
Instantiates one of the model classes of the library (with a vision-to-text modeling head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
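For instance, a model with a freshly initialized vision-to-text head can be built from just a configuration (a minimal sketch; nlpconnect/vit-gpt2-image-captioning is assumed here purely as an illustrative checkpoint whose configuration maps to a vision-to-text architecture):
>>> from transformers import AutoConfig, AutoModelForVision2Seq
>>> # Download configuration from huggingface.co and cache, then build an untrained model from it.
>>> config = AutoConfig.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
>>> model = AutoModelForVision2Seq.from_config(config)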
( pretrained_model_name_or_path *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) — Will be passed along to the underlying model’s __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
The model is a model provided by the library (loaded with the model id string of a pretrained model).
The model was saved using save_pretrained() and is reloaded by supplying the save directory.
The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with a vision-to-text modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path.
The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForVision2Seq
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForVision2Seq.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
>>> # Update configuration during loading
>>> model = AutoModelForVision2Seq.from_pretrained("nlpconnect/vit-gpt2-image-captioning", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/vision_encoder_decoder_tf_model_config.json")
>>> model = AutoModelForVision2Seq.from_pretrained(
...     "./tf_model/vision_encoder_decoder_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a visual question answering head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class.
Instantiates one of the model classes of the library (with a visual question answering head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
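For instance, a model with a freshly initialized visual question answering head can be built from just a configuration (a minimal sketch; only the configuration is downloaded, no weights):
>>> from transformers import AutoConfig, AutoModelForVisualQuestionAnswering
>>> # Download configuration from huggingface.co and cache, then build an untrained model from it.
>>> config = AutoConfig.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
>>> model = AutoModelForVisualQuestionAnswering.from_config(config)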
( pretrained_model_name_or_path *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) — Will be passed along to the underlying model’s __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
The model is a model provided by the library (loaded with the model id string of a pretrained model).
The model was saved using save_pretrained() and is reloaded by supplying the save directory.
The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with a visual question answering head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path.
The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForVisualQuestionAnswering
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForVisualQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
>>> # Update configuration during loading
>>> model = AutoModelForVisualQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/vilt_tf_model_config.json")
>>> model = AutoModelForVisualQuestionAnswering.from_pretrained(
... "./tf_model/vilt_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with an audio classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class.
Instantiates one of the model classes of the library (with an audio classification head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
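For instance, a model with a freshly initialized audio classification head can be built from just a configuration (a minimal sketch; facebook/wav2vec2-base-960h is assumed here purely as an illustrative checkpoint whose configuration maps to an audio architecture):
>>> from transformers import AutoConfig, AutoModelForAudioClassification
>>> # Download configuration from huggingface.co and cache, then build an untrained model from it.
>>> config = AutoConfig.from_pretrained("facebook/wav2vec2-base-960h")
>>> model = AutoModelForAudioClassification.from_config(config)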
( pretrained_model_name_or_path *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) — Will be passed along to the underlying model’s __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
The model is a model provided by the library (loaded with the model id string of a pretrained model).
The model was saved using save_pretrained() and is reloaded by supplying the save directory.
The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with an audio classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path.
The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForAudioClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForAudioClassification.from_pretrained("facebook/wav2vec2-base-960h")
>>> # Update configuration during loading
>>> model = AutoModelForAudioClassification.from_pretrained("facebook/wav2vec2-base-960h", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/wav2vec2_tf_model_config.json")
>>> model = AutoModelForAudioClassification.from_pretrained(
...     "./tf_model/wav2vec2_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with an audio frame (token) classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class.
Instantiates one of the model classes of the library (with an audio frame (token) classification head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
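For instance, a model with a freshly initialized audio frame classification head can be built from just a configuration (a minimal sketch; facebook/wav2vec2-base-960h is assumed here purely as an illustrative checkpoint whose configuration maps to an audio architecture):
>>> from transformers import AutoConfig, AutoModelForAudioFrameClassification
>>> # Download configuration from huggingface.co and cache, then build an untrained model from it.
>>> config = AutoConfig.from_pretrained("facebook/wav2vec2-base-960h")
>>> model = AutoModelForAudioFrameClassification.from_config(config)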
( pretrained_model_name_or_path *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint into a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) — Will be passed along to the underlying model’s __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
The model is a model provided by the library (loaded with the model id string of a pretrained model).
The model was saved using save_pretrained() and is reloaded by supplying the save directory.
The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with an audio frame (token) classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path.
The model is set in evaluation mode by default using model.eval() (so for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForAudioFrameClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForAudioFrameClassification.from_pretrained("facebook/wav2vec2-base-960h")
>>> # Update configuration during loading
>>> model = AutoModelForAudioFrameClassification.from_pretrained("facebook/wav2vec2-base-960h", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/wav2vec2_tf_model_config.json")
>>> model = AutoModelForAudioFrameClassification.from_pretrained(
...     "./tf_model/wav2vec2_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a connectionist temporal classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class.
Instantiates one of the model classes of the library (with a connectionist temporal classification head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
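For instance, a model with a freshly initialized connectionist temporal classification head can be built from just a configuration (a minimal sketch; facebook/wav2vec2-base-960h is used here because its configuration maps to a CTC-capable architecture):
>>> from transformers import AutoConfig, AutoModelForCTC
>>> # Download configuration from huggingface.co and cache, then build an untrained model from it.
>>> config = AutoConfig.from_pretrained("facebook/wav2vec2-base-960h")
>>> model = AutoModelForCTC.from_config(config)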
( *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model’s __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. The configuration can be loaded automatically, e.g., when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) —
A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
If a configuration is provided with config, **kwargs will be passed directly to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with a connectionist temporal classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
The model is set in evaluation mode by default using model.eval()
(so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForCTC
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForCTC.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForCTC.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForCTC.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
Instantiates one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
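For example, a hedged sketch of from_config() (the checkpoint name is an assumption; any encoder-decoder speech model, such as Speech2Text, would work):
>>> from transformers import AutoConfig, AutoModelForSpeechSeq2Seq
>>> # Only the configuration is downloaded; the weights are randomly initialized, not loaded.
>>> config = AutoConfig.from_pretrained("facebook/s2t-small-librispeech-asr")
>>> model = AutoModelForSpeechSeq2Seq.from_config(config)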
( *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model’s __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. The configuration can be loaded automatically, e.g., when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) —
A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
If a configuration is provided with config, **kwargs will be passed directly to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
The model is set in evaluation mode by default using model.eval()
(so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForSpeechSeq2Seq
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForSpeechSeq2Seq.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForSpeechSeq2Seq.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForSpeechSeq2Seq.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with an audio retrieval via x-vector head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
Instantiates one of the model classes of the library (with an audio retrieval via x-vector head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
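For example, a hedged sketch of from_config() (the checkpoint name is an assumption; any architecture with an x-vector head, such as Wav2Vec2, would work):
>>> from transformers import AutoConfig, AutoModelForAudioXVector
>>> # Only the configuration is downloaded; the weights are randomly initialized, not loaded.
>>> config = AutoConfig.from_pretrained("facebook/wav2vec2-base")
>>> model = AutoModelForAudioXVector.from_config(config)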
( *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model’s __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. The configuration can be loaded automatically, e.g., when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) —
A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
If a configuration is provided with config, **kwargs will be passed directly to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with an audio retrieval via x-vector head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
The model is set in evaluation mode by default using model.eval()
(so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForAudioXVector
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForAudioXVector.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForAudioXVector.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForAudioXVector.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a masked image modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
Instantiates one of the model classes of the library (with a masked image modeling head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
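For example, a hedged sketch of from_config() (the checkpoint name is an assumption; any architecture with a masked image modeling head, such as ViT, would work):
>>> from transformers import AutoConfig, AutoModelForMaskedImageModeling
>>> # Only the configuration is downloaded; the weights are randomly initialized, not loaded.
>>> config = AutoConfig.from_pretrained("google/vit-base-patch16-224-in21k")
>>> model = AutoModelForMaskedImageModeling.from_config(config)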
( *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model’s __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. The configuration can be loaded automatically, e.g., when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) —
A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
If a configuration is provided with config, **kwargs will be passed directly to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with a masked image modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
The model is set in evaluation mode by default using model.eval()
(so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForMaskedImageModeling
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForMaskedImageModeling.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForMaskedImageModeling.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForMaskedImageModeling.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with an object detection head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
Instantiates one of the model classes of the library (with an object detection head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
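For example, a hedged sketch of from_config() (the checkpoint name is an assumption; any architecture with an object detection head, such as DETR, would work):
>>> from transformers import AutoConfig, AutoModelForObjectDetection
>>> # Only the configuration is downloaded; the weights are randomly initialized, not loaded.
>>> config = AutoConfig.from_pretrained("facebook/detr-resnet-50")
>>> model = AutoModelForObjectDetection.from_config(config)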
( *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model’s __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. The configuration can be loaded automatically, e.g., when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) —
A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
If a configuration is provided with config, **kwargs will be passed directly to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with an object detection head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
The model is set in evaluation mode by default using model.eval()
(so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForObjectDetection
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForObjectDetection.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForObjectDetection.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForObjectDetection.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with an image segmentation head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
Instantiates one of the model classes of the library (with an image segmentation head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
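For example, a hedged sketch of from_config() (the checkpoint name is an assumption; any architecture with an image segmentation head, such as DETR, would work):
>>> from transformers import AutoConfig, AutoModelForImageSegmentation
>>> # Only the configuration is downloaded; the weights are randomly initialized, not loaded.
>>> config = AutoConfig.from_pretrained("facebook/detr-resnet-50-panoptic")
>>> model = AutoModelForImageSegmentation.from_config(config)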
( *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model’s __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. The configuration can be loaded automatically, e.g., when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) —
A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
If a configuration is provided with config, **kwargs will be passed directly to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with an image segmentation head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
The model is set in evaluation mode by default using model.eval()
(so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForImageSegmentation
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForImageSegmentation.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForImageSegmentation.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForImageSegmentation.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a semantic segmentation head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
Instantiates one of the model classes of the library (with a semantic segmentation head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
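For example, a hedged sketch of from_config() (the checkpoint name is an assumption; any architecture with a semantic segmentation head, such as SegFormer, would work):
>>> from transformers import AutoConfig, AutoModelForSemanticSegmentation
>>> # Only the configuration is downloaded; the weights are randomly initialized, not loaded.
>>> config = AutoConfig.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
>>> model = AutoModelForSemanticSegmentation.from_config(config)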
( *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model’s __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. The configuration can be loaded automatically, e.g., when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) —
A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
If a configuration is provided with config, **kwargs will be passed directly to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with a semantic segmentation head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
The model is set in evaluation mode by default using model.eval()
(so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForSemanticSegmentation
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForSemanticSegmentation.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForSemanticSegmentation.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForSemanticSegmentation.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with an instance segmentation head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
Instantiates one of the model classes of the library (with an instance segmentation head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
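For example, a hedged sketch of from_config() (the checkpoint name is an assumption; any architecture with an instance segmentation head, such as MaskFormer, would work):
>>> from transformers import AutoConfig, AutoModelForInstanceSegmentation
>>> # Only the configuration is downloaded; the weights are randomly initialized, not loaded.
>>> config = AutoConfig.from_pretrained("facebook/maskformer-swin-base-ade")
>>> model = AutoModelForInstanceSegmentation.from_config(config)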
( *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model’s __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. The configuration can be loaded automatically, e.g., when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) —
A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
If a configuration is provided with config, **kwargs will be passed directly to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with an instance segmentation head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
The model is set in evaluation mode by default using model.eval()
(so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForInstanceSegmentation
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForInstanceSegmentation.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForInstanceSegmentation.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForInstanceSegmentation.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a zero-shot object detection head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
Instantiates one of the model classes of the library (with a zero-shot object detection head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
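For example, a hedged sketch of from_config() (the checkpoint name is an assumption; any architecture with a zero-shot object detection head, such as OWL-ViT, would work):
>>> from transformers import AutoConfig, AutoModelForZeroShotObjectDetection
>>> # Only the configuration is downloaded; the weights are randomly initialized, not loaded.
>>> config = AutoConfig.from_pretrained("google/owlvit-base-patch32")
>>> model = AutoModelForZeroShotObjectDetection.from_config(config)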
( *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a TensorFlow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model’s __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. The configuration can be loaded automatically, e.g., when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
state_dict (Dict[str, torch.Tensor], optional) —
A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_tf (bool, optional, defaults to False) —
Load the model weights from a TensorFlow checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
If a configuration is provided with config, **kwargs will be passed directly to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with a zero-shot object detection head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
The model is set in evaluation mode by default using model.eval()
(so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForZeroShotObjectDetection
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForZeroShotObjectDetection.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = AutoModelForZeroShotObjectDetection.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained("./tf_model/bert_tf_model_config.json")
>>> model = AutoModelForZeroShotObjectDetection.from_pretrained(
... "./tf_model/bert_tf_checkpoint.ckpt.index", from_tf=True, config=config
... )
This is a generic model class that will be instantiated as one of the base model classes of the library when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
Instantiates one of the base model classes of the library from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
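For example, a minimal sketch of from_config() for the TensorFlow base auto class:
>>> from transformers import AutoConfig, TFAutoModel
>>> # Only the configuration is downloaded; the weights are randomly initialized, not loaded.
>>> config = AutoConfig.from_pretrained("bert-base-cased")
>>> model = TFAutoModel.from_config(config)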
( *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model’s __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. The configuration can be loaded automatically, e.g., when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) —
Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
If a configuration is provided with config, **kwargs will be passed directly to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the base model classes of the library from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
Examples:
>>> from transformers import AutoConfig, TFAutoModel
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModel.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = TFAutoModel.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModel.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a pretraining head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
Instantiates one of the model classes of the library (with a pretraining head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
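For example, a minimal sketch of from_config() for this class:
>>> from transformers import AutoConfig, TFAutoModelForPreTraining
>>> # Only the configuration is downloaded; the weights are randomly initialized, not loaded.
>>> config = AutoConfig.from_pretrained("bert-base-cased")
>>> model = TFAutoModelForPreTraining.from_config(config)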
( *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model’s __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. The configuration can be loaded automatically, e.g., when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) —
Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
If a configuration is provided with config, **kwargs will be passed directly to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with a pretraining head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
Examples:
>>> from transformers import AutoConfig, TFAutoModelForPreTraining
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForPreTraining.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = TFAutoModelForPreTraining.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForPreTraining.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a causal language modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
Instantiates one of the model classes of the library (with a causal language modeling head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
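For example, a minimal sketch of from_config() (using GPT-2, a standard causal language model):
>>> from transformers import AutoConfig, TFAutoModelForCausalLM
>>> # Only the configuration is downloaded; the weights are randomly initialized, not loaded.
>>> config = AutoConfig.from_pretrained("gpt2")
>>> model = TFAutoModelForCausalLM.from_config(config)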
( *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model’s __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. The configuration can be loaded automatically, e.g., when the model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) —
Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
output_attentions=True
). Behaves differently depending on whether a config
is provided or
automatically loaded:
config
, **kwargs
will be directly passed to the
underlying model’s __init__
method (we assume all relevant updates to the configuration have
already been done)kwargs
will be first passed to the configuration class
initialization function (from_pretrained()). Each key of kwargs
that
corresponds to a configuration attribute will be used to override said attribute with the
supplied kwargs
value. Remaining keys that do not correspond to any configuration attribute
will be passed to the underlying model’s __init__
function.Instantiate one of the model classes of the library (with a causal language modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
Examples:
>>> from transformers import AutoConfig, TFAutoModelForCausalLM
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForCausalLM.from_pretrained("gpt2")
>>> # Update configuration during loading
>>> model = TFAutoModelForCausalLM.from_pretrained("gpt2", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/gpt2_pt_model_config.json")
>>> model = TFAutoModelForCausalLM.from_pretrained(
...     "./pt_model/gpt2_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with an image classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class.
Instantiates one of the model classes of the library (with an image classification head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
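A minimal sketch of building an untrained model from a configuration (the google/vit-base-patch16-224 checkpoint is used purely for illustration; from_config downloads no weights):
>>> from transformers import AutoConfig, TFAutoModelForImageClassification
>>> # Download configuration from huggingface.co and cache it.
>>> config = AutoConfig.from_pretrained("google/vit-base-patch16-224")
>>> # Weights are randomly initialized; use from_pretrained() to load trained weights.
>>> model = TFAutoModelForImageClassification.from_config(config)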
( pretrained_model_name_or_path *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or URL to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) — Will be passed along to the underlying model's __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Instantiate one of the model classes of the library (with an image classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path.
Examples:
>>> from transformers import AutoConfig, TFAutoModelForImageClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224")
>>> # Update configuration during loading
>>> model = TFAutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/vit_pt_model_config.json")
>>> model = TFAutoModelForImageClassification.from_pretrained(
...     "./pt_model/vit_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a semantic segmentation head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class.
Instantiates one of the model classes of the library (with a semantic segmentation head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
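A minimal sketch of building an untrained model from a configuration (the nvidia/segformer-b0-finetuned-ade-512-512 checkpoint is used purely for illustration; from_config downloads no weights):
>>> from transformers import AutoConfig, TFAutoModelForSemanticSegmentation
>>> # Download configuration from huggingface.co and cache it.
>>> config = AutoConfig.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
>>> # Weights are randomly initialized; use from_pretrained() to load trained weights.
>>> model = TFAutoModelForSemanticSegmentation.from_config(config)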
( pretrained_model_name_or_path *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or URL to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) — Will be passed along to the underlying model's __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Instantiate one of the model classes of the library (with a semantic segmentation head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path.
Examples:
>>> from transformers import AutoConfig, TFAutoModelForSemanticSegmentation
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
>>> # Update configuration during loading
>>> model = TFAutoModelForSemanticSegmentation.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/segformer_pt_model_config.json")
>>> model = TFAutoModelForSemanticSegmentation.from_pretrained(
...     "./pt_model/segformer_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a masked language modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class.
Instantiates one of the model classes of the library (with a masked language modeling head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
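A minimal sketch of building an untrained model from a configuration (the bert-base-cased checkpoint is used purely for illustration; from_config downloads no weights):
>>> from transformers import AutoConfig, TFAutoModelForMaskedLM
>>> # Download configuration from huggingface.co and cache it.
>>> config = AutoConfig.from_pretrained("bert-base-cased")
>>> # Weights are randomly initialized; use from_pretrained() to load trained weights.
>>> model = TFAutoModelForMaskedLM.from_config(config)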
( pretrained_model_name_or_path *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or URL to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) — Will be passed along to the underlying model's __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Instantiate one of the model classes of the library (with a masked language modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path.
Examples:
>>> from transformers import AutoConfig, TFAutoModelForMaskedLM
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForMaskedLM.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = TFAutoModelForMaskedLM.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForMaskedLM.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence language modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class.
Instantiates one of the model classes of the library (with a sequence-to-sequence language modeling head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
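A minimal sketch of building an untrained model from a configuration (the t5-base checkpoint is used purely for illustration; from_config downloads no weights):
>>> from transformers import AutoConfig, TFAutoModelForSeq2SeqLM
>>> # Download configuration from huggingface.co and cache it.
>>> config = AutoConfig.from_pretrained("t5-base")
>>> # Weights are randomly initialized; use from_pretrained() to load trained weights.
>>> model = TFAutoModelForSeq2SeqLM.from_config(config)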
( pretrained_model_name_or_path *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or URL to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) — Will be passed along to the underlying model's __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Instantiate one of the model classes of the library (with a sequence-to-sequence language modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path.
Examples:
>>> from transformers import AutoConfig, TFAutoModelForSeq2SeqLM
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForSeq2SeqLM.from_pretrained("t5-base")
>>> # Update configuration during loading
>>> model = TFAutoModelForSeq2SeqLM.from_pretrained("t5-base", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/t5_pt_model_config.json")
>>> model = TFAutoModelForSeq2SeqLM.from_pretrained(
... "./pt_model/t5_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class.
Instantiates one of the model classes of the library (with a sequence classification head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
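A minimal sketch of building an untrained model from a configuration (the bert-base-cased checkpoint is used purely for illustration; from_config downloads no weights):
>>> from transformers import AutoConfig, TFAutoModelForSequenceClassification
>>> # Download configuration from huggingface.co and cache it.
>>> config = AutoConfig.from_pretrained("bert-base-cased")
>>> # Weights are randomly initialized; use from_pretrained() to load trained weights.
>>> model = TFAutoModelForSequenceClassification.from_config(config)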
( pretrained_model_name_or_path *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or URL to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) — Will be passed along to the underlying model's __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Instantiate one of the model classes of the library (with a sequence classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path.
Examples:
>>> from transformers import AutoConfig, TFAutoModelForSequenceClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForSequenceClassification.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a multiple choice head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class.
Instantiates one of the model classes of the library (with a multiple choice head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
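A minimal sketch of building an untrained model from a configuration (the bert-base-cased checkpoint is used purely for illustration; from_config downloads no weights):
>>> from transformers import AutoConfig, TFAutoModelForMultipleChoice
>>> # Download configuration from huggingface.co and cache it.
>>> config = AutoConfig.from_pretrained("bert-base-cased")
>>> # Weights are randomly initialized; use from_pretrained() to load trained weights.
>>> model = TFAutoModelForMultipleChoice.from_config(config)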
( pretrained_model_name_or_path *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or URL to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) — Will be passed along to the underlying model's __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Instantiate one of the model classes of the library (with a multiple choice head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path.
Examples:
>>> from transformers import AutoConfig, TFAutoModelForMultipleChoice
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForMultipleChoice.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = TFAutoModelForMultipleChoice.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForMultipleChoice.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a next sentence prediction head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class.
Instantiates one of the model classes of the library (with a next sentence prediction head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
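A minimal sketch of building an untrained model from a configuration (the bert-base-cased checkpoint is used purely for illustration; from_config downloads no weights):
>>> from transformers import AutoConfig, TFAutoModelForNextSentencePrediction
>>> # Download configuration from huggingface.co and cache it.
>>> config = AutoConfig.from_pretrained("bert-base-cased")
>>> # Weights are randomly initialized; use from_pretrained() to load trained weights.
>>> model = TFAutoModelForNextSentencePrediction.from_config(config)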
( pretrained_model_name_or_path *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or URL to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) — Will be passed along to the underlying model's __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Instantiate one of the model classes of the library (with a next sentence prediction head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path.
Examples:
>>> from transformers import AutoConfig, TFAutoModelForNextSentencePrediction
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForNextSentencePrediction.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = TFAutoModelForNextSentencePrediction.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForNextSentencePrediction.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a table question answering head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class.
Instantiates one of the model classes of the library (with a table question answering head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
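A minimal sketch of building an untrained model from a configuration (the google/tapas-base-finetuned-wtq checkpoint is used purely for illustration; from_config downloads no weights):
>>> from transformers import AutoConfig, TFAutoModelForTableQuestionAnswering
>>> # Download configuration from huggingface.co and cache it.
>>> config = AutoConfig.from_pretrained("google/tapas-base-finetuned-wtq")
>>> # Weights are randomly initialized; use from_pretrained() to load trained weights.
>>> model = TFAutoModelForTableQuestionAnswering.from_config(config)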
( pretrained_model_name_or_path *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or URL to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) — Will be passed along to the underlying model's __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Instantiate one of the model classes of the library (with a table question answering head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path.
Examples:
>>> from transformers import AutoConfig, TFAutoModelForTableQuestionAnswering
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForTableQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq")
>>> # Update configuration during loading
>>> model = TFAutoModelForTableQuestionAnswering.from_pretrained("google/tapas-base-finetuned-wtq", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/tapas_pt_model_config.json")
>>> model = TFAutoModelForTableQuestionAnswering.from_pretrained(
... "./pt_model/tapas_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a document question answering head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class.
Instantiates one of the model classes of the library (with a document question answering head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
Examples:
>>> from transformers import AutoConfig, TFAutoModelForDocumentQuestionAnswering
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained("impira/layoutlm-document-qa", revision="52e01b3")
>>> model = TFAutoModelForDocumentQuestionAnswering.from_config(config)
( pretrained_model_name_or_path *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or URL to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) — Will be passed along to the underlying model's __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Instantiate one of the model classes of the library (with a document question answering head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path.
Examples:
>>> from transformers import AutoConfig, TFAutoModelForDocumentQuestionAnswering
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForDocumentQuestionAnswering.from_pretrained("impira/layoutlm-document-qa", revision="52e01b3")
>>> # Update configuration during loading
>>> model = TFAutoModelForDocumentQuestionAnswering.from_pretrained("impira/layoutlm-document-qa", revision="52e01b3", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/layoutlm_pt_model_config.json")
>>> model = TFAutoModelForDocumentQuestionAnswering.from_pretrained(
... "./pt_model/layoutlm_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a token classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class.
Instantiates one of the model classes of the library (with a token classification head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
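A minimal sketch of building an untrained model from a configuration (the bert-base-cased checkpoint is used purely for illustration; from_config downloads no weights):
>>> from transformers import AutoConfig, TFAutoModelForTokenClassification
>>> # Download configuration from huggingface.co and cache it.
>>> config = AutoConfig.from_pretrained("bert-base-cased")
>>> # Weights are randomly initialized; use from_pretrained() to load trained weights.
>>> model = TFAutoModelForTokenClassification.from_config(config)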
( pretrained_model_name_or_path *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or URL to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) — Will be passed along to the underlying model's __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Instantiate one of the model classes of the library (with a token classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path.
Examples:
>>> from transformers import AutoConfig, TFAutoModelForTokenClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForTokenClassification.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = TFAutoModelForTokenClassification.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForTokenClassification.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a question answering head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class.
Instantiates one of the model classes of the library (with a question answering head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
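A minimal sketch of building an untrained model from a configuration (the bert-base-cased checkpoint is used purely for illustration; from_config downloads no weights):
>>> from transformers import AutoConfig, TFAutoModelForQuestionAnswering
>>> # Download configuration from huggingface.co and cache it.
>>> config = AutoConfig.from_pretrained("bert-base-cased")
>>> # Weights are randomly initialized; use from_pretrained() to load trained weights.
>>> model = TFAutoModelForQuestionAnswering.from_config(config)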
( pretrained_model_name_or_path *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or URL to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) — Will be passed along to the underlying model's __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Instantiate one of the model classes of the library (with a question answering head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path.
Examples:
>>> from transformers import AutoConfig, TFAutoModelForQuestionAnswering
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForQuestionAnswering.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = TFAutoModelForQuestionAnswering.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForQuestionAnswering.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a vision-to-text modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class.
Instantiates one of the model classes of the library (with a vision-to-text modeling head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
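A minimal sketch of building an untrained model from a configuration (the nlpconnect/vit-gpt2-image-captioning checkpoint is used purely for illustration; from_config downloads no weights):
>>> from transformers import AutoConfig, TFAutoModelForVision2Seq
>>> # Download configuration from huggingface.co and cache it.
>>> config = AutoConfig.from_pretrained("nlpconnect/vit-gpt2-image-captioning")
>>> # Weights are randomly initialized; use from_pretrained() to load trained weights.
>>> model = TFAutoModelForVision2Seq.from_config(config)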
( pretrained_model_name_or_path *model_args **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or URL to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) — Will be passed along to the underlying model's __init__() method.
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) — Load the model weights from a PyTorch checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Instantiate one of the model classes of the library (with a vision-to-text modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path.
Examples:
>>> from transformers import AutoConfig, TFAutoModelForVision2Seq
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForVision2Seq.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = TFAutoModelForVision2Seq.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForVision2Seq.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
Instantiates one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
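As a hedged sketch (the checkpoint name is an illustrative assumption, not an official example):
>>> from transformers import AutoConfig, TFAutoModelForSpeechSeq2Seq
>>> # Hypothetical speech-to-text checkpoint; only its configuration is fetched.
>>> config = AutoConfig.from_pretrained("facebook/s2t-small-librispeech-asr")
>>> # Weights are randomly initialized, not loaded from the checkpoint.
>>> model = TFAutoModelForSpeechSeq2Seq.from_config(config)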
( pretrained_model_name_or_path, *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) —
Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with a sequence-to-sequence speech-to-text modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
Examples:
>>> from transformers import AutoConfig, TFAutoModelForSpeechSeq2Seq
>>> # Download model and configuration from huggingface.co and cache.
>>> model = TFAutoModelForSpeechSeq2Seq.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = TFAutoModelForSpeechSeq2Seq.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = TFAutoModelForSpeechSeq2Seq.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the base model classes of the library when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
Instantiates one of the base model classes of the library from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
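As a short sketch of the distinction, the following builds a randomly initialized Flax BERT model from its configuration alone:
>>> from transformers import AutoConfig, FlaxAutoModel
>>> # Only the configuration is downloaded; no pretrained weights are loaded.
>>> config = AutoConfig.from_pretrained("bert-base-cased")
>>> model = FlaxAutoModel.from_config(config)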
( pretrained_model_name_or_path, *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) —
Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the base model classes of the library from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
Examples:
>>> from transformers import AutoConfig, FlaxAutoModel
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModel.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = FlaxAutoModel.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = FlaxAutoModel.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
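Building on the parameters documented above, a small sketch of pinning the checkpoint version with revision; any branch name, tag name, or commit id accepted by git works here:
>>> # Pin the download to a specific revision; "main" is the default branch.
>>> model = FlaxAutoModel.from_pretrained("bert-base-cased", revision="main")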
This is a generic model class that will be instantiated as one of the model classes of the library (with a causal language modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
Instantiates one of the model classes of the library (with a causal language modeling head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
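For instance, a minimal sketch using the gpt2 configuration, which maps to a Flax causal language model:
>>> from transformers import AutoConfig, FlaxAutoModelForCausalLM
>>> # Only the configuration is fetched; the weights are randomly initialized.
>>> config = AutoConfig.from_pretrained("gpt2")
>>> model = FlaxAutoModelForCausalLM.from_config(config)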
( pretrained_model_name_or_path, *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) —
Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with a causal language modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForCausalLM
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForCausalLM.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = FlaxAutoModelForCausalLM.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = FlaxAutoModelForCausalLM.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a pretraining head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
Instantiates one of the model classes of the library (with a pretraining head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
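A hedged sketch of editing the configuration before building the model; hidden_dropout_prob is a BERT-specific configuration attribute used here purely for illustration:
>>> from transformers import AutoConfig, FlaxAutoModelForPreTraining
>>> # Override a config attribute while loading it, then build a fresh model.
>>> config = AutoConfig.from_pretrained("bert-base-cased", hidden_dropout_prob=0.2)
>>> model = FlaxAutoModelForPreTraining.from_config(config)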
( pretrained_model_name_or_path, *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) —
Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with a pretraining head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForPreTraining
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForPreTraining.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = FlaxAutoModelForPreTraining.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = FlaxAutoModelForPreTraining.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a masked language modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
Instantiates one of the model classes of the library (with a masked language modeling head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
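A minimal sketch of the two kwargs paths described in the from_pretrained() parameters below: with an explicit config, keyword arguments go straight to the model; without one, they first update the automatically loaded configuration:
>>> from transformers import AutoConfig, FlaxAutoModelForMaskedLM
>>> # Path 1: update the configuration explicitly, then build from it.
>>> config = AutoConfig.from_pretrained("bert-base-cased", output_attentions=True)
>>> model = FlaxAutoModelForMaskedLM.from_config(config)
>>> # Path 2: let from_pretrained() route the kwarg into the auto-loaded config.
>>> model = FlaxAutoModelForMaskedLM.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True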
( pretrained_model_name_or_path, *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) —
Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with a masked language modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForMaskedLM
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForMaskedLM.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = FlaxAutoModelForMaskedLM.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = FlaxAutoModelForMaskedLM.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence language modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
Instantiates one of the model classes of the library (with a sequence-to-sequence language modeling head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
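For instance, a minimal sketch using the t5-small configuration, which maps to a Flax encoder-decoder language model:
>>> from transformers import AutoConfig, FlaxAutoModelForSeq2SeqLM
>>> # Only the configuration is fetched; from_config() never loads weights.
>>> config = AutoConfig.from_pretrained("t5-small")
>>> model = FlaxAutoModelForSeq2SeqLM.from_config(config)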
( pretrained_model_name_or_path, *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) —
Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with a sequence-to-sequence language modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForSeq2SeqLM
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForSeq2SeqLM.from_pretrained("t5-base")
>>> # Update configuration during loading
>>> model = FlaxAutoModelForSeq2SeqLM.from_pretrained("t5-base", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/t5_pt_model_config.json")
>>> model = FlaxAutoModelForSeq2SeqLM.from_pretrained(
... "./pt_model/t5_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
Instantiates one of the model classes of the library (with a sequence classification head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
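For instance, a minimal sketch that sizes the classification head before building the model; the three-class setup is an illustrative assumption:
>>> from transformers import AutoConfig, FlaxAutoModelForSequenceClassification
>>> # num_labels is stored on the config and shapes the classification head.
>>> config = AutoConfig.from_pretrained("bert-base-cased", num_labels=3)
>>> model = FlaxAutoModelForSequenceClassification.from_config(config)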
( pretrained_model_name_or_path, *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) —
Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with a sequence classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForSequenceClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForSequenceClassification.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = FlaxAutoModelForSequenceClassification.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = FlaxAutoModelForSequenceClassification.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a question answering head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
Instantiates one of the model classes of the library (with a question answering head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
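Separately from from_config(), a hedged sketch of the output_loading_info flag documented below, assuming it behaves as described when loading pretrained weights:
>>> from transformers import FlaxAutoModelForQuestionAnswering
>>> # Also returns a dict of missing keys, unexpected keys, and error messages.
>>> model, loading_info = FlaxAutoModelForQuestionAnswering.from_pretrained(
...     "bert-base-cased", output_loading_info=True
... )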
( pretrained_model_name_or_path, *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) —
Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with a question answering head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForQuestionAnswering
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForQuestionAnswering.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = FlaxAutoModelForQuestionAnswering.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = FlaxAutoModelForQuestionAnswering.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a token classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
Instantiates one of the model classes of the library (with a token classification head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
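For instance, a minimal sketch with a hypothetical two-label tagging scheme; the labels live on the configuration only:
>>> from transformers import AutoConfig, FlaxAutoModelForTokenClassification
>>> # Illustrative label set; id2label and num_labels are config attributes.
>>> config = AutoConfig.from_pretrained(
...     "bert-base-cased", num_labels=2, id2label={0: "O", 1: "ENTITY"}
... )
>>> model = FlaxAutoModelForTokenClassification.from_config(config)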
( pretrained_model_name_or_path, *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) —
Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with a token classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForTokenClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForTokenClassification.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = FlaxAutoModelForTokenClassification.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = FlaxAutoModelForTokenClassification.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a multiple choice head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
Instantiates one of the model classes of the library (with a multiple choice head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
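Separately from from_config(), a sketch of offline loading with the local_files_only parameter documented below; this assumes the checkpoint is already present in the local cache:
>>> from transformers import FlaxAutoModelForMultipleChoice
>>> # Raises instead of downloading when the files are not cached locally.
>>> model = FlaxAutoModelForMultipleChoice.from_pretrained(
...     "bert-base-cased", local_files_only=True
... )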
( pretrained_model_name_or_path, *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) —
Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with a multiple choice head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForMultipleChoice
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForMultipleChoice.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = FlaxAutoModelForMultipleChoice.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = FlaxAutoModelForMultipleChoice.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with a next sentence prediction head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
Instantiates one of the model classes of the library (with a next sentence prediction head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
( pretrained_model_name_or_path, *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) —
Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with a next sentence prediction head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForNextSentencePrediction
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForNextSentencePrediction.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = FlaxAutoModelForNextSentencePrediction.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a TensorFlow model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = FlaxAutoModelForNextSentencePrediction.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
This is a generic model class that will be instantiated as one of the model classes of the library (with an image classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
Instantiates one of the model classes of the library (with an image classification head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
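For instance, a minimal sketch with a ViT configuration, a more natural fit for image classification than the BERT checkpoint used in the examples below (the checkpoint name is an assumption):
>>> from transformers import AutoConfig, FlaxAutoModelForImageClassification
>>> # Only the configuration is fetched; the weights are randomly initialized.
>>> config = AutoConfig.from_pretrained("google/vit-base-patch16-224")
>>> model = FlaxAutoModelForImageClassification.from_config(config)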
( pretrained_model_name_or_path, *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
- A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
- A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
- A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a TensorFlow model using the provided conversion scripts and loading the TensorFlow model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
- The model is a model provided by the library (loaded with the model id string of a pretrained model).
- The model was saved using save_pretrained() and is reloaded by supplying the save directory.
- The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) —
Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it has been loaded) and to initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
- If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
- If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with an image classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it’s missing, by
falling back to using pattern matching on pretrained_model_name_or_path
:
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForImageClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224")
>>> # Update configuration during loading
>>> model = FlaxAutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a Flax model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/vit_pt_model_config.json")
>>> model = FlaxAutoModelForImageClassification.from_pretrained(
...     "./pt_model/vit_pytorch_model.bin", from_pt=True, config=config
... )
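As a sketch of the output_loading_info flag documented above (the checkpoint name is reused from the example and is illustrative, not prescriptive):
>>> model, loading_info = FlaxAutoModelForImageClassification.from_pretrained(
...     "google/vit-base-patch16-224", output_loading_info=True
... )
>>> # loading_info is a dict reporting, e.g., missing and unexpected parameter keys.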
This is a generic model class that will be instantiated as one of the model classes of the library (with a vision-to-text modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( config, **kwargs )
Parameters
config (PretrainedConfig) —
The model class to instantiate is selected based on the configuration class.
Instantiates one of the model classes of the library (with a vision-to-text modeling head) from a configuration.
Note: Loading a model from its configuration file does not load the model weights. It only affects the model’s configuration. Use from_pretrained() to load the model weights.
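Examples (a minimal sketch; the local directory is assumed to contain a config.json saved with save_pretrained() whose model type maps to a Flax vision-to-text model, e.g. a vision-encoder-decoder configuration):
>>> from transformers import AutoConfig, FlaxAutoModelForVision2Seq
>>> # Load a configuration from a local directory (no weights are loaded).
>>> config = AutoConfig.from_pretrained("./my_model_directory/")
>>> model = FlaxAutoModelForVision2Seq.from_config(config)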
( *model_args, **kwargs )
Parameters
pretrained_model_name_or_path (str or os.PathLike) —
Can be either:
A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
A path or url to a PyTorch state_dict save file (e.g., ./pt_model/pytorch_model.bin). In this case, from_pt should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the PyTorch model to a Flax model using the provided conversion scripts and loading the Flax model afterwards.
model_args (additional positional arguments, optional) —
Will be passed along to the underlying model __init__() method.
config (PretrainedConfig, optional) —
Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
The model is a model provided by the library (loaded with the model id string of a pretrained model).
The model was saved using save_pretrained() and is reloaded by supplying the save directory.
The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
cache_dir (str or os.PathLike, optional) —
Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
from_pt (bool, optional, defaults to False) —
Load the model weights from a PyTorch checkpoint save file (see docstring of the pretrained_model_name_or_path argument).
force_download (bool, optional, defaults to False) —
Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
resume_download (bool, optional, defaults to False) —
Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
proxies (Dict[str, str], optional) —
A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
output_loading_info (bool, optional, defaults to False) —
Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
local_files_only (bool, optional, defaults to False) —
Whether or not to only look at local files (e.g., not try downloading the model).
revision (str, optional, defaults to "main") —
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
trust_remote_code (bool, optional, defaults to False) —
Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
kwargs (additional keyword arguments, optional) —
Can be used to update the configuration object (after it is loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
If a configuration is provided with config, **kwargs will be directly passed to the underlying model’s __init__ method (we assume all relevant updates to the configuration have already been done).
If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model’s __init__ function.
Instantiate one of the model classes of the library (with a vision-to-text modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or when it’s missing, by falling back to using pattern matching on pretrained_model_name_or_path:
Examples:
>>> from transformers import AutoConfig, FlaxAutoModelForVision2Seq
>>> # Download model and configuration from huggingface.co and cache.
>>> model = FlaxAutoModelForVision2Seq.from_pretrained("bert-base-cased")
>>> # Update configuration during loading
>>> model = FlaxAutoModelForVision2Seq.from_pretrained("bert-base-cased", output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a PyTorch checkpoint file instead of a Flax model (slower)
>>> config = AutoConfig.from_pretrained("./pt_model/bert_pt_model_config.json")
>>> model = FlaxAutoModelForVision2Seq.from_pretrained(
... "./pt_model/bert_pytorch_model.bin", from_pt=True, config=config
... )
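As a sketch of the local_files_only flag documented above (the directory path reuses the placeholder from the parameter list and is assumed to hold a model saved with save_pretrained()):
>>> # Load from a local save directory without attempting any network access.
>>> model = FlaxAutoModelForVision2Seq.from_pretrained(
...     "./my_model_directory/", local_files_only=True
... )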