Auto Classes
In many cases, the architecture you want to use can be guessed from the name or the path of the pretrained model you
are supplying to the from_pretrained()
method. AutoClasses are here to do this job for you so that you
automatically retrieve the relevant model given the name/path to the pretrained weights/config/vocabulary.
Instantiating one of AutoConfig, AutoModel, and AutoTokenizer will directly create a class of the relevant architecture. For instance
model = AutoModel.from_pretrained('bert-base-cased')
will create a model that is an instance of BertModel.
There is one class of AutoModel
for each task, and for each backend (PyTorch, TensorFlow, or Flax).
Extending the Auto Classes
Each of the auto classes has a method to be extended with your custom classes. For instance, if you have defined a
custom model class NewModel, make sure you have a matching NewModelConfig; then you can add those to the auto
classes like this:
from transformers import AutoConfig, AutoModel
AutoConfig.register("new-model", NewModelConfig)
AutoModel.register(NewModelConfig, NewModel)
You will then be able to use the auto classes like you would usually do!
If your NewModelConfig is a subclass of PretrainedConfig, make sure its model_type attribute is set to the same
key you use when registering the config (here "new-model").
Likewise, if your NewModel is a subclass of PreTrainedModel, make sure its config_class attribute is set to the
same class you use when registering the model (here NewModelConfig).
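For illustration, here is a minimal sketch of what such a pair of custom classes could look like; the hidden_size attribute and the single linear layer are placeholder choices, not taken from any real model:

from torch import nn
from transformers import PretrainedConfig, PreTrainedModel

class NewModelConfig(PretrainedConfig):
    # model_type must match the key passed to AutoConfig.register ("new-model")
    model_type = "new-model"

    def __init__(self, hidden_size=64, **kwargs):
        super().__init__(**kwargs)
        self.hidden_size = hidden_size

class NewModel(PreTrainedModel):
    # config_class must match the config class passed to AutoModel.register
    config_class = NewModelConfig

    def __init__(self, config):
        super().__init__(config)
        self.layer = nn.Linear(config.hidden_size, config.hidden_size)

    def forward(self, inputs):
        return self.layer(inputs)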
AutoConfig
This is a generic configuration class that will be instantiated as one of the configuration classes of the library when created with the from_pretrained() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
from_pretrained( pretrained_model_name_or_path, **kwargs )
Parameters
- pretrained_model_name_or_path (str or os.PathLike) — Can be either:
  - A string, the model id of a pretrained model configuration hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
  - A path to a directory containing a configuration file saved using the save_pretrained() method, e.g., ./my_model_directory/.
  - A path or url to a saved configuration JSON file, e.g., ./my_model_directory/configuration.json.
- cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files and override the cached versions if they exist.
- resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
- proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- return_unused_kwargs (bool, optional, defaults to False) — If False, then this function returns just the final configuration object. If True, then this function returns a Tuple(config, unused_kwargs) where unused_kwargs is a dictionary consisting of the key/value pairs whose keys are not configuration attributes: i.e., the part of kwargs which has not been used to update config and is otherwise ignored.
- trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
- kwargs (additional keyword arguments, optional) — The values in kwargs of any keys which are configuration attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are not configuration attributes is controlled by the return_unused_kwargs keyword parameter.
Instantiate one of the configuration classes of the library from a pretrained model configuration.
The configuration class to instantiate is selected based on the model_type
property of the config object
that is loaded, or when it's missing, by falling back to using pattern matching on
pretrained_model_name_or_path
:
- albert → AlbertConfig (ALBERT model)
- bart → BartConfig (BART model)
- beit → BeitConfig (BEiT model)
- bert → BertConfig (BERT model)
- bert-generation → BertGenerationConfig (Bert Generation model)
- big_bird → BigBirdConfig (BigBird model)
- bigbird_pegasus → BigBirdPegasusConfig (BigBirdPegasus model)
- blenderbot → BlenderbotConfig (Blenderbot model)
- blenderbot-small → BlenderbotSmallConfig (BlenderbotSmall model)
- camembert → CamembertConfig (CamemBERT model)
- canine → CanineConfig (Canine model)
- clip → CLIPConfig (CLIP model)
- convbert → ConvBertConfig (ConvBERT model)
- ctrl → CTRLConfig (CTRL model)
- deberta → DebertaConfig (DeBERTa model)
- deberta-v2 → DebertaV2Config (DeBERTa-v2 model)
- deit → DeiTConfig (DeiT model)
- detr → DetrConfig (DETR model)
- distilbert → DistilBertConfig (DistilBERT model)
- dpr → DPRConfig (DPR model)
- electra → ElectraConfig (ELECTRA model)
- encoder-decoder → EncoderDecoderConfig (Encoder decoder model)
- flaubert → FlaubertConfig (FlauBERT model)
- fnet → FNetConfig (FNet model)
- fsmt → FSMTConfig (FairSeq Machine-Translation model)
- funnel → FunnelConfig (Funnel Transformer model)
- gpt2 → GPT2Config (OpenAI GPT-2 model)
- gpt_neo → GPTNeoConfig (GPT Neo model)
- gptj → GPTJConfig (GPT-J model)
- hubert → HubertConfig (Hubert model)
- ibert → IBertConfig (I-BERT model)
- imagegpt → ImageGPTConfig (ImageGPT model)
- layoutlm → LayoutLMConfig (LayoutLM model)
- layoutlmv2 → LayoutLMv2Config (LayoutLMv2 model)
- led → LEDConfig (LED model)
- longformer → LongformerConfig (Longformer model)
- luke → LukeConfig (LUKE model)
- lxmert → LxmertConfig (LXMERT model)
- m2m_100 → M2M100Config (M2M100 model)
- marian → MarianConfig (Marian model)
- mbart → MBartConfig (mBART model)
- megatron-bert → MegatronBertConfig (MegatronBert model)
- mobilebert → MobileBertConfig (MobileBERT model)
- mpnet → MPNetConfig (MPNet model)
- mt5 → MT5Config (mT5 model)
- openai-gpt → OpenAIGPTConfig (OpenAI GPT model)
- pegasus → PegasusConfig (Pegasus model)
- perceiver → PerceiverConfig (Perceiver model)
- prophetnet → ProphetNetConfig (ProphetNet model)
- qdqbert → QDQBertConfig (QDQBert model)
- rag → RagConfig (RAG model)
- reformer → ReformerConfig (Reformer model)
- rembert → RemBertConfig (RemBERT model)
- retribert → RetriBertConfig (RetriBERT model)
- roberta → RobertaConfig (RoBERTa model)
- roformer → RoFormerConfig (RoFormer model)
- segformer → SegformerConfig (SegFormer model)
- sew → SEWConfig (SEW model)
- sew-d → SEWDConfig (SEW-D model)
- speech-encoder-decoder → SpeechEncoderDecoderConfig (Speech Encoder decoder model)
- speech_to_text → Speech2TextConfig (Speech2Text model)
- speech_to_text_2 → Speech2Text2Config (Speech2Text2 model)
- splinter → SplinterConfig (Splinter model)
- squeezebert → SqueezeBertConfig (SqueezeBERT model)
- t5 → T5Config (T5 model)
- tapas → TapasConfig (TAPAS model)
- transfo-xl → TransfoXLConfig (Transformer-XL model)
- trocr → TrOCRConfig (TrOCR model)
- unispeech → UniSpeechConfig (UniSpeech model)
- unispeech-sat → UniSpeechSatConfig (UniSpeechSat model)
- vision-encoder-decoder → VisionEncoderDecoderConfig (Vision Encoder decoder model)
- vision-text-dual-encoder → VisionTextDualEncoderConfig (VisionTextDualEncoder model)
- visual_bert → VisualBertConfig (VisualBert model)
- vit → ViTConfig (ViT model)
- wav2vec2 → Wav2Vec2Config (Wav2Vec2 model)
- xlm → XLMConfig (XLM model)
- xlm-prophetnet → XLMProphetNetConfig (XLMProphetNet model)
- xlm-roberta → XLMRobertaConfig (XLM-RoBERTa model)
- xlnet → XLNetConfig (XLNet model)
Examples:
>>> from transformers import AutoConfig
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-uncased')
>>> # Download configuration from huggingface.co (user-uploaded) and cache.
>>> config = AutoConfig.from_pretrained('dbmdz/bert-base-german-cased')
>>> # If configuration file is in a directory (e.g., was saved using save_pretrained('./test/saved_model/')).
>>> config = AutoConfig.from_pretrained('./test/bert_saved_model/')
>>> # Load a specific configuration file.
>>> config = AutoConfig.from_pretrained('./test/bert_saved_model/my_configuration.json')
>>> # Change some config attributes when loading a pretrained config.
>>> config = AutoConfig.from_pretrained('bert-base-uncased', output_attentions=True, foo=False)
>>> config.output_attentions
True
>>> config, unused_kwargs = AutoConfig.from_pretrained('bert-base-uncased', output_attentions=True, foo=False, return_unused_kwargs=True)
>>> config.output_attentions
True
>>> unused_kwargs
{'foo': False}
register( model_type, config )
Parameters
- model_type (str) — The model type like “bert” or “gpt”.
- config (PretrainedConfig) — The config to register.
Register a new configuration for this class.
AutoTokenizer
This is a generic tokenizer class that will be instantiated as one of the tokenizer classes of the library when created with the AutoTokenizer.from_pretrained() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
Instantiate one of the tokenizer classes of the library from a pretrained model vocabulary.
The tokenizer class to instantiate is selected based on the model_type
property of the config object
(either passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it's
missing, by falling back to using pattern matching on pretrained_model_name_or_path
:
- albert → AlbertTokenizer or AlbertTokenizerFast (ALBERT model)
- bart → BartTokenizer or BartTokenizerFast (BART model)
- barthez → BarthezTokenizer or BarthezTokenizerFast (BARThez model)
- bartpho → BartphoTokenizer (BARTpho model)
- bert → BertTokenizer or BertTokenizerFast (BERT model)
- bert-generation → BertGenerationTokenizer (Bert Generation model)
- bert-japanese → BertJapaneseTokenizer (BertJapanese model)
- bertweet → BertweetTokenizer (Bertweet model)
- big_bird → BigBirdTokenizer or BigBirdTokenizerFast (BigBird model)
- bigbird_pegasus → PegasusTokenizer or PegasusTokenizerFast (BigBirdPegasus model)
- blenderbot → BlenderbotTokenizer or BlenderbotTokenizerFast (Blenderbot model)
- blenderbot-small → BlenderbotSmallTokenizer (BlenderbotSmall model)
- byt5 → ByT5Tokenizer (ByT5 model)
- camembert → CamembertTokenizer or CamembertTokenizerFast (CamemBERT model)
- canine → CanineTokenizer (Canine model)
- clip → CLIPTokenizer or CLIPTokenizerFast (CLIP model)
- convbert → ConvBertTokenizer or ConvBertTokenizerFast (ConvBERT model)
- cpm → CpmTokenizer or CpmTokenizerFast (CPM model)
- ctrl → CTRLTokenizer (CTRL model)
- deberta → DebertaTokenizer or DebertaTokenizerFast (DeBERTa model)
- deberta-v2 → DebertaV2Tokenizer (DeBERTa-v2 model)
- distilbert → DistilBertTokenizer or DistilBertTokenizerFast (DistilBERT model)
- dpr → DPRQuestionEncoderTokenizer or DPRQuestionEncoderTokenizerFast (DPR model)
- electra → ElectraTokenizer or ElectraTokenizerFast (ELECTRA model)
- flaubert → FlaubertTokenizer (FlauBERT model)
- fnet → FNetTokenizer or FNetTokenizerFast (FNet model)
- fsmt → FSMTTokenizer (FairSeq Machine-Translation model)
- funnel → FunnelTokenizer or FunnelTokenizerFast (Funnel Transformer model)
- gpt2 → GPT2Tokenizer or GPT2TokenizerFast (OpenAI GPT-2 model)
- gpt_neo → GPT2Tokenizer or GPT2TokenizerFast (GPT Neo model)
- hubert → Wav2Vec2CTCTokenizer (Hubert model)
- ibert → RobertaTokenizer or RobertaTokenizerFast (I-BERT model)
- layoutlm → LayoutLMTokenizer or LayoutLMTokenizerFast (LayoutLM model)
- layoutlmv2 → LayoutLMv2Tokenizer or LayoutLMv2TokenizerFast (LayoutLMv2 model)
- led → LEDTokenizer or LEDTokenizerFast (LED model)
- longformer → LongformerTokenizer or LongformerTokenizerFast (Longformer model)
- luke → LukeTokenizer (LUKE model)
- lxmert → LxmertTokenizer or LxmertTokenizerFast (LXMERT model)
- m2m_100 → M2M100Tokenizer (M2M100 model)
- marian → MarianTokenizer (Marian model)
- mbart → MBartTokenizer or MBartTokenizerFast (mBART model)
- mbart50 → MBart50Tokenizer or MBart50TokenizerFast (mBART-50 model)
- mobilebert → MobileBertTokenizer or MobileBertTokenizerFast (MobileBERT model)
- mpnet → MPNetTokenizer or MPNetTokenizerFast (MPNet model)
- mt5 → MT5Tokenizer or MT5TokenizerFast (mT5 model)
- openai-gpt → OpenAIGPTTokenizer or OpenAIGPTTokenizerFast (OpenAI GPT model)
- pegasus → PegasusTokenizer or PegasusTokenizerFast (Pegasus model)
- perceiver → PerceiverTokenizer (Perceiver model)
- phobert → PhobertTokenizer (PhoBERT model)
- prophetnet → ProphetNetTokenizer (ProphetNet model)
- qdqbert → BertTokenizer or BertTokenizerFast (QDQBert model)
- rag → RagTokenizer (RAG model)
- reformer → ReformerTokenizer or ReformerTokenizerFast (Reformer model)
- rembert → RemBertTokenizer or RemBertTokenizerFast (RemBERT model)
- retribert → RetriBertTokenizer or RetriBertTokenizerFast (RetriBERT model)
- roberta → RobertaTokenizer or RobertaTokenizerFast (RoBERTa model)
- roformer → RoFormerTokenizer or RoFormerTokenizerFast (RoFormer model)
- speech_to_text → Speech2TextTokenizer (Speech2Text model)
- speech_to_text_2 → Speech2Text2Tokenizer (Speech2Text2 model)
- splinter → SplinterTokenizer or SplinterTokenizerFast (Splinter model)
- squeezebert → SqueezeBertTokenizer or SqueezeBertTokenizerFast (SqueezeBERT model)
- t5 → T5Tokenizer or T5TokenizerFast (T5 model)
- tapas → TapasTokenizer (TAPAS model)
- transfo-xl → TransfoXLTokenizer (Transformer-XL model)
- wav2vec2 → Wav2Vec2CTCTokenizer (Wav2Vec2 model)
- xlm → XLMTokenizer (XLM model)
- xlm-prophetnet → XLMProphetNetTokenizer (XLMProphetNet model)
- xlm-roberta → XLMRobertaTokenizer or XLMRobertaTokenizerFast (XLM-RoBERTa model)
- xlnet → XLNetTokenizer or XLNetTokenizerFast (XLNet model)
Params:
- pretrained_model_name_or_path (str or os.PathLike): Can be either:
  - A string, the model id of a predefined tokenizer hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
  - A path to a directory containing vocabulary files required by the tokenizer, for instance saved using the save_pretrained() method, e.g., ./my_model_directory/.
  - A path or url to a single saved vocabulary file if and only if the tokenizer only requires a single vocabulary file (like Bert or XLNet), e.g.: ./my_model_directory/vocab.txt. (Not applicable to all derived classes)
- inputs (additional positional arguments, optional): Will be passed along to the Tokenizer __init__() method.
- config (PretrainedConfig, optional): The configuration object used to determine the tokenizer class to instantiate.
- cache_dir (str or os.PathLike, optional): Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- force_download (bool, optional, defaults to False): Whether or not to force the (re-)download of the model weights and configuration files and override the cached versions if they exist.
- resume_download (bool, optional, defaults to False): Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
- proxies (Dict[str, str], optional): A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- revision (str, optional, defaults to "main"): The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- subfolder (str, optional): In case the relevant files are located inside a subfolder of the model repo on huggingface.co (e.g. for facebook/rag-token-base), specify it here.
- use_fast (bool, optional, defaults to True): Whether or not to try to load the fast version of the tokenizer.
- tokenizer_type (str, optional): Tokenizer type to be loaded.
- trust_remote_code (bool, optional, defaults to False): Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
- kwargs (additional keyword arguments, optional): Will be passed to the Tokenizer __init__() method. Can be used to set special tokens like bos_token, eos_token, unk_token, sep_token, pad_token, cls_token, mask_token, additional_special_tokens. See parameters in the __init__() for more details.
Examples:
>>> from transformers import AutoTokenizer
>>> # Download vocabulary from huggingface.co and cache.
>>> tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
>>> # Download vocabulary from huggingface.co (user-uploaded) and cache.
>>> tokenizer = AutoTokenizer.from_pretrained('dbmdz/bert-base-german-cased')
>>> # If vocabulary files are in a directory (e.g. tokenizer was saved using save_pretrained('./test/saved_model/'))
>>> tokenizer = AutoTokenizer.from_pretrained('./test/bert_saved_model/')
register( config_class, slow_tokenizer_class = None, fast_tokenizer_class = None )
Parameters
- config_class (PretrainedConfig) — The configuration corresponding to the model to register.
- slow_tokenizer_class (PreTrainedTokenizer, optional) — The slow tokenizer to register.
- fast_tokenizer_class (PreTrainedTokenizerFast, optional) — The fast tokenizer to register.
Register a new tokenizer in this mapping.
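Continuing the hypothetical NewModel example from the top of this page, a registration could look like this (NewModelTokenizer and NewModelTokenizerFast are assumed to be your own tokenizer subclasses, not real library classes):

from transformers import AutoTokenizer

AutoTokenizer.register(NewModelConfig, slow_tokenizer_class=NewModelTokenizer, fast_tokenizer_class=NewModelTokenizerFast)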
AutoFeatureExtractor
This is a generic feature extractor class that will be instantiated as one of the feature extractor classes of the library when created with the AutoFeatureExtractor.from_pretrained() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
Instantiate one of the feature extractor classes of the library from a pretrained model.
The feature extractor class to instantiate is selected based on the model_type
property of the config
object (either passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when
it's missing, by falling back to using pattern matching on pretrained_model_name_or_path
:
- beit → BeitFeatureExtractor (BEiT model)
- clip → CLIPFeatureExtractor (CLIP model)
- deit → DeiTFeatureExtractor (DeiT model)
- detr → DetrFeatureExtractor (DETR model)
- hubert → Wav2Vec2FeatureExtractor (Hubert model)
- layoutlmv2 → LayoutLMv2FeatureExtractor (LayoutLMv2 model)
- perceiver → PerceiverFeatureExtractor (Perceiver model)
- speech_to_text → Speech2TextFeatureExtractor (Speech2Text model)
- vit → ViTFeatureExtractor (ViT model)
- wav2vec2 → Wav2Vec2FeatureExtractor (Wav2Vec2 model)
Params:
- pretrained_model_name_or_path (str or os.PathLike): This can be either:
  - a string, the model id of a pretrained feature_extractor hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
  - a path to a directory containing a feature extractor file saved using the save_pretrained() method, e.g., ./my_model_directory/.
  - a path or url to a saved feature extractor JSON file, e.g., ./my_model_directory/preprocessor_config.json.
- cache_dir (str or os.PathLike, optional): Path to a directory in which a downloaded pretrained model feature extractor should be cached if the standard cache should not be used.
- force_download (bool, optional, defaults to False): Whether or not to force the (re-)download of the feature extractor files and override the cached versions if they exist.
- resume_download (bool, optional, defaults to False): Whether or not to delete incompletely received files. Attempts to resume the download if such a file exists.
- proxies (Dict[str, str], optional): A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- use_auth_token (str or bool, optional): The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running transformers-cli login (stored in ~/.huggingface).
- revision (str, optional, defaults to "main"): The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- return_unused_kwargs (bool, optional, defaults to False): If False, then this function returns just the final feature extractor object. If True, then this function returns a Tuple(feature_extractor, unused_kwargs) where unused_kwargs is a dictionary consisting of the key/value pairs whose keys are not feature extractor attributes: i.e., the part of kwargs which has not been used to update feature_extractor and is otherwise ignored.
- kwargs (Dict[str, Any], optional): The values in kwargs of any keys which are feature extractor attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are not feature extractor attributes is controlled by the return_unused_kwargs keyword parameter.
Passing use_auth_token=True
is required when you want to use a private model.
Examples:
>>> from transformers import AutoFeatureExtractor
>>> # Download feature extractor from huggingface.co and cache.
>>> feature_extractor = AutoFeatureExtractor.from_pretrained('facebook/wav2vec2-base-960h')
>>> # If feature extractor files are in a directory (e.g. feature extractor was saved using save_pretrained('./test/saved_model/'))
>>> feature_extractor = AutoFeatureExtractor.from_pretrained('./test/saved_model/')
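As a hedged illustration of the use_auth_token note above (username/private-model is a hypothetical private repo you would need access to):

>>> from transformers import AutoFeatureExtractor
>>> # Uses the token stored by `transformers-cli login`; a token string can be passed instead of True
>>> feature_extractor = AutoFeatureExtractor.from_pretrained('username/private-model', use_auth_token=True)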
AutoProcessor
This is a generic processor class that will be instantiated as one of the processor classes of the library when created with the AutoProcessor.from_pretrained() class method.
This class cannot be instantiated directly using __init__()
(throws an error).
Instantiate one of the processor classes of the library from a pretrained model.
The processor class to instantiate is selected based on the model_type
property of the config object
(either passed as an argument or loaded from pretrained_model_name_or_path
if possible):
- clip → CLIPProcessor (CLIP model)
- layoutlmv2 → LayoutLMv2Processor (LayoutLMv2 model)
- speech_to_text → Speech2TextProcessor (Speech2Text model)
- speech_to_text_2 → Speech2Text2Processor (Speech2Text2 model)
- trocr → TrOCRProcessor (TrOCR model)
- vision-text-dual-encoder → VisionTextDualEncoderProcessor (VisionTextDualEncoder model)
- wav2vec2 → Wav2Vec2Processor (Wav2Vec2 model)
Params:
- pretrained_model_name_or_path (str or os.PathLike): This can be either:
  - a string, the model id of a pretrained feature_extractor hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
  - a path to a directory containing processor files saved using the save_pretrained() method, e.g., ./my_model_directory/.
- cache_dir (str or os.PathLike, optional): Path to a directory in which a downloaded pretrained model feature extractor should be cached if the standard cache should not be used.
- force_download (bool, optional, defaults to False): Whether or not to force the (re-)download of the feature extractor files and override the cached versions if they exist.
- resume_download (bool, optional, defaults to False): Whether or not to delete incompletely received files. Attempts to resume the download if such a file exists.
- proxies (Dict[str, str], optional): A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- use_auth_token (str or bool, optional): The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running transformers-cli login (stored in ~/.huggingface).
- revision (str, optional, defaults to "main"): The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- return_unused_kwargs (bool, optional, defaults to False): If False, then this function returns just the final feature extractor object. If True, then this function returns a Tuple(feature_extractor, unused_kwargs) where unused_kwargs is a dictionary consisting of the key/value pairs whose keys are not feature extractor attributes: i.e., the part of kwargs which has not been used to update feature_extractor and is otherwise ignored.
- kwargs (Dict[str, Any], optional): The values in kwargs of any keys which are feature extractor attributes will be used to override the loaded values. Behavior concerning key/value pairs whose keys are not feature extractor attributes is controlled by the return_unused_kwargs keyword parameter.
Passing use_auth_token=True
is required when you want to use a private model.
Examples:
>>> from transformers import AutoProcessor
>>> # Download processor from huggingface.co and cache.
>>> processor = AutoProcessor.from_pretrained('facebook/wav2vec2-base-960h')
>>> # If processor files are in a directory (e.g. processor was saved using save_pretrained('./test/saved_model/'))
>>> processor = AutoProcessor.from_pretrained('./test/saved_model/')
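Once loaded, a processor is called like its underlying components; as a rough sketch continuing the Wav2Vec2 example above (raw_audio is a placeholder for a list or array of audio samples, and the sampling rate must match the pretrained checkpoint):

>>> inputs = processor(raw_audio, sampling_rate=16000, return_tensors='pt')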
AutoModel
This is a generic model class that will be instantiated as one of the base model classes of the library when created
with the from_pretrained()
class method or the
from_config()
class method.
This class cannot be instantiated directly using __init__()
(throws an error).
from_config( **kwargs )
Parameters
- config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:
- AlbertConfig configuration class: AlbertModel (ALBERT model)
- BartConfig configuration class: BartModel (BART model)
- BeitConfig configuration class: BeitModel (BEiT model)
- BertConfig configuration class: BertModel (BERT model)
- BertGenerationConfig configuration class: BertGenerationEncoder (Bert Generation model)
- BigBirdConfig configuration class: BigBirdModel (BigBird model)
- BigBirdPegasusConfig configuration class: BigBirdPegasusModel (BigBirdPegasus model)
- BlenderbotConfig configuration class: BlenderbotModel (Blenderbot model)
- BlenderbotSmallConfig configuration class: BlenderbotSmallModel (BlenderbotSmall model)
- CLIPConfig configuration class: CLIPModel (CLIP model)
- CTRLConfig configuration class: CTRLModel (CTRL model)
- CamembertConfig configuration class: CamembertModel (CamemBERT model)
- CanineConfig configuration class: CanineModel (Canine model)
- ConvBertConfig configuration class: ConvBertModel (ConvBERT model)
- DPRConfig configuration class: DPRQuestionEncoder (DPR model)
- DebertaConfig configuration class: DebertaModel (DeBERTa model)
- DebertaV2Config configuration class: DebertaV2Model (DeBERTa-v2 model)
- DeiTConfig configuration class: DeiTModel (DeiT model)
- DetrConfig configuration class: DetrModel (DETR model)
- DistilBertConfig configuration class: DistilBertModel (DistilBERT model)
- ElectraConfig configuration class: ElectraModel (ELECTRA model)
- FNetConfig configuration class: FNetModel (FNet model)
- FSMTConfig configuration class: FSMTModel (FairSeq Machine-Translation model)
- FlaubertConfig configuration class: FlaubertModel (FlauBERT model)
- FunnelConfig configuration class: FunnelModel or FunnelBaseModel (Funnel Transformer model)
- GPT2Config configuration class: GPT2Model (OpenAI GPT-2 model)
- GPTJConfig configuration class: GPTJModel (GPT-J model)
- GPTNeoConfig configuration class: GPTNeoModel (GPT Neo model)
- HubertConfig configuration class: HubertModel (Hubert model)
- IBertConfig configuration class: IBertModel (I-BERT model)
- ImageGPTConfig configuration class: ImageGPTModel (ImageGPT model)
- LEDConfig configuration class: LEDModel (LED model)
- LayoutLMConfig configuration class: LayoutLMModel (LayoutLM model)
- LayoutLMv2Config configuration class: LayoutLMv2Model (LayoutLMv2 model)
- LongformerConfig configuration class: LongformerModel (Longformer model)
- LukeConfig configuration class: LukeModel (LUKE model)
- LxmertConfig configuration class: LxmertModel (LXMERT model)
- M2M100Config configuration class: M2M100Model (M2M100 model)
- MBartConfig configuration class: MBartModel (mBART model)
- MPNetConfig configuration class: MPNetModel (MPNet model)
- MT5Config configuration class: MT5Model (mT5 model)
- MarianConfig configuration class: MarianModel (Marian model)
- MegatronBertConfig configuration class: MegatronBertModel (MegatronBert model)
- MobileBertConfig configuration class: MobileBertModel (MobileBERT model)
- OpenAIGPTConfig configuration class: OpenAIGPTModel (OpenAI GPT model)
- PegasusConfig configuration class: PegasusModel (Pegasus model)
- PerceiverConfig configuration class: PerceiverModel (Perceiver model)
- ProphetNetConfig configuration class: ProphetNetModel (ProphetNet model)
- QDQBertConfig configuration class: QDQBertModel (QDQBert model)
- ReformerConfig configuration class: ReformerModel (Reformer model)
- RemBertConfig configuration class: RemBertModel (RemBERT model)
- RetriBertConfig configuration class: RetriBertModel (RetriBERT model)
- RoFormerConfig configuration class: RoFormerModel (RoFormer model)
- RobertaConfig configuration class: RobertaModel (RoBERTa model)
- SEWConfig configuration class: SEWModel (SEW model)
- SEWDConfig configuration class: SEWDModel (SEW-D model)
- SegformerConfig configuration class: SegformerModel (SegFormer model)
- Speech2TextConfig configuration class: Speech2TextModel (Speech2Text model)
- SplinterConfig configuration class: SplinterModel (Splinter model)
- SqueezeBertConfig configuration class: SqueezeBertModel (SqueezeBERT model)
- T5Config configuration class: T5Model (T5 model)
- TapasConfig configuration class: TapasModel (TAPAS model)
- TransfoXLConfig configuration class: TransfoXLModel (Transformer-XL model)
- UniSpeechConfig configuration class: UniSpeechModel (UniSpeech model)
- UniSpeechSatConfig configuration class: UniSpeechSatModel (UniSpeechSat model)
- ViTConfig configuration class: ViTModel (ViT model)
- VisionTextDualEncoderConfig configuration class: VisionTextDualEncoderModel (VisionTextDualEncoder model)
- VisualBertConfig configuration class: VisualBertModel (VisualBert model)
- Wav2Vec2Config configuration class: Wav2Vec2Model (Wav2Vec2 model)
- XLMConfig configuration class: XLMModel (XLM model)
- XLMProphetNetConfig configuration class: XLMProphetNetModel (XLMProphetNet model)
- XLMRobertaConfig configuration class: XLMRobertaModel (XLM-RoBERTa model)
- XLNetConfig configuration class: XLNetModel (XLNet model)
Instantiates one of the base model classes of the library from a configuration.
Note:
Loading a model from its configuration file does not load the model weights. It only affects the
model's configuration. Use from_pretrained()
to load the model
weights.
Examples:
>>> from transformers import AutoConfig, AutoModel
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> model = AutoModel.from_config(config)
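Since from_config() initializes the model with random weights (see the note above), it is mostly useful for training from scratch, optionally after tweaking the configuration first; num_hidden_layers below is just one example of an attribute you might override:

>>> from transformers import AutoConfig, AutoModel
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> config.num_hidden_layers = 6  # example override: a smaller architecture, randomly initialized
>>> model = AutoModel.from_config(config)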
from_pretrained( *model_args, **kwargs )
Parameters
- pretrained_model_name_or_path (str or os.PathLike) — Can be either:
  - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
  - A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
  - A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
- model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
- config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
  - The model is a model provided by the library (loaded with the model id string of a pretrained model).
  - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
  - The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
- state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
- cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
- force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
- proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
- revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
- kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
  - If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Instantiate one of the base model classes of the library from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it's missing,
by falling back to using pattern matching on pretrained_model_name_or_path
:
- albert → AlbertModel (ALBERT model)
- bart → BartModel (BART model)
- beit → BeitModel (BEiT model)
- bert → BertModel (BERT model)
- bert-generation → BertGenerationEncoder (Bert Generation model)
- big_bird → BigBirdModel (BigBird model)
- bigbird_pegasus → BigBirdPegasusModel (BigBirdPegasus model)
- blenderbot → BlenderbotModel (Blenderbot model)
- blenderbot-small → BlenderbotSmallModel (BlenderbotSmall model)
- camembert → CamembertModel (CamemBERT model)
- canine → CanineModel (Canine model)
- clip → CLIPModel (CLIP model)
- convbert → ConvBertModel (ConvBERT model)
- ctrl → CTRLModel (CTRL model)
- deberta → DebertaModel (DeBERTa model)
- deberta-v2 → DebertaV2Model (DeBERTa-v2 model)
- deit → DeiTModel (DeiT model)
- detr → DetrModel (DETR model)
- distilbert → DistilBertModel (DistilBERT model)
- dpr → DPRQuestionEncoder (DPR model)
- electra → ElectraModel (ELECTRA model)
- flaubert → FlaubertModel (FlauBERT model)
- fnet → FNetModel (FNet model)
- fsmt → FSMTModel (FairSeq Machine-Translation model)
- funnel → FunnelModel or FunnelBaseModel (Funnel Transformer model)
- gpt2 → GPT2Model (OpenAI GPT-2 model)
- gpt_neo → GPTNeoModel (GPT Neo model)
- gptj → GPTJModel (GPT-J model)
- hubert → HubertModel (Hubert model)
- ibert → IBertModel (I-BERT model)
- imagegpt → ImageGPTModel (ImageGPT model)
- layoutlm → LayoutLMModel (LayoutLM model)
- layoutlmv2 → LayoutLMv2Model (LayoutLMv2 model)
- led → LEDModel (LED model)
- longformer → LongformerModel (Longformer model)
- luke → LukeModel (LUKE model)
- lxmert → LxmertModel (LXMERT model)
- m2m_100 → M2M100Model (M2M100 model)
- marian → MarianModel (Marian model)
- mbart → MBartModel (mBART model)
- megatron-bert → MegatronBertModel (MegatronBert model)
- mobilebert → MobileBertModel (MobileBERT model)
- mpnet → MPNetModel (MPNet model)
- mt5 → MT5Model (mT5 model)
- openai-gpt → OpenAIGPTModel (OpenAI GPT model)
- pegasus → PegasusModel (Pegasus model)
- perceiver → PerceiverModel (Perceiver model)
- prophetnet → ProphetNetModel (ProphetNet model)
- qdqbert → QDQBertModel (QDQBert model)
- reformer → ReformerModel (Reformer model)
- rembert → RemBertModel (RemBERT model)
- retribert → RetriBertModel (RetriBERT model)
- roberta → RobertaModel (RoBERTa model)
- roformer → RoFormerModel (RoFormer model)
- segformer → SegformerModel (SegFormer model)
- sew → SEWModel (SEW model)
- sew-d → SEWDModel (SEW-D model)
- speech_to_text → Speech2TextModel (Speech2Text model)
- splinter → SplinterModel (Splinter model)
- squeezebert → SqueezeBertModel (SqueezeBERT model)
- t5 → T5Model (T5 model)
- tapas → TapasModel (TAPAS model)
- transfo-xl → TransfoXLModel (Transformer-XL model)
- unispeech → UniSpeechModel (UniSpeech model)
- unispeech-sat → UniSpeechSatModel (UniSpeechSat model)
- vision-text-dual-encoder → VisionTextDualEncoderModel (VisionTextDualEncoder model)
- visual_bert → VisualBertModel (VisualBert model)
- vit → ViTModel (ViT model)
- wav2vec2 → Wav2Vec2Model (Wav2Vec2 model)
- xlm → XLMModel (XLM model)
- xlm-prophetnet → XLMProphetNetModel (XLMProphetNet model)
- xlm-roberta → XLMRobertaModel (XLM-RoBERTa model)
- xlnet → XLNetModel (XLNet model)
The model is set in evaluation mode by default using model.eval()
(so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModel
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModel.from_pretrained('bert-base-cased')
>>> # Update configuration during loading
>>> model = AutoModel.from_pretrained('bert-base-cased', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained('./tf_model/bert_tf_model_config.json')
>>> model = AutoModel.from_pretrained('./tf_model/bert_tf_checkpoint.ckpt.index', from_tf=True, config=config)
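As a quick, informal illustration of what the loaded base model returns (the shape comment reflects the standard output of BERT-style encoders):

>>> from transformers import AutoTokenizer, AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
>>> model = AutoModel.from_pretrained('bert-base-cased')
>>> inputs = tokenizer("Hello world", return_tensors='pt')
>>> outputs = model(**inputs)
>>> outputs.last_hidden_state.shape  # (batch_size, sequence_length, hidden_size)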
AutoModelForPreTraining
This is a generic model class that will be instantiated as one of the model classes of the library (with a pretraining head) when created
with the from_pretrained()
class method or the
from_config()
class method.
This class cannot be instantiated directly using __init__()
(throws an error).
from_config( **kwargs )
Parameters
- config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:
- AlbertConfig configuration class: AlbertForPreTraining (ALBERT model)
- BartConfig configuration class: BartForConditionalGeneration (BART model)
- BertConfig configuration class: BertForPreTraining (BERT model)
- BigBirdConfig configuration class: BigBirdForPreTraining (BigBird model)
- CTRLConfig configuration class: CTRLLMHeadModel (CTRL model)
- CamembertConfig configuration class: CamembertForMaskedLM (CamemBERT model)
- DebertaConfig configuration class: DebertaForMaskedLM (DeBERTa model)
- DebertaV2Config configuration class: DebertaV2ForMaskedLM (DeBERTa-v2 model)
- DistilBertConfig configuration class: DistilBertForMaskedLM (DistilBERT model)
- ElectraConfig configuration class: ElectraForPreTraining (ELECTRA model)
- FNetConfig configuration class: FNetForPreTraining (FNet model)
- FSMTConfig configuration class: FSMTForConditionalGeneration (FairSeq Machine-Translation model)
- FlaubertConfig configuration class: FlaubertWithLMHeadModel (FlauBERT model)
- FunnelConfig configuration class: FunnelForPreTraining (Funnel Transformer model)
- GPT2Config configuration class: GPT2LMHeadModel (OpenAI GPT-2 model)
- IBertConfig configuration class: IBertForMaskedLM (I-BERT model)
- LayoutLMConfig configuration class: LayoutLMForMaskedLM (LayoutLM model)
- LongformerConfig configuration class: LongformerForMaskedLM (Longformer model)
- LxmertConfig configuration class: LxmertForPreTraining (LXMERT model)
- MPNetConfig configuration class: MPNetForMaskedLM (MPNet model)
- MegatronBertConfig configuration class: MegatronBertForPreTraining (MegatronBert model)
- MobileBertConfig configuration class: MobileBertForPreTraining (MobileBERT model)
- OpenAIGPTConfig configuration class: OpenAIGPTLMHeadModel (OpenAI GPT model)
- RetriBertConfig configuration class: RetriBertModel (RetriBERT model)
- RobertaConfig configuration class: RobertaForMaskedLM (RoBERTa model)
- SqueezeBertConfig configuration class: SqueezeBertForMaskedLM (SqueezeBERT model)
- T5Config configuration class: T5ForConditionalGeneration (T5 model)
- TapasConfig configuration class: TapasForMaskedLM (TAPAS model)
- TransfoXLConfig configuration class: TransfoXLLMHeadModel (Transformer-XL model)
- UniSpeechConfig configuration class: UniSpeechForPreTraining (UniSpeech model)
- UniSpeechSatConfig configuration class: UniSpeechSatForPreTraining (UniSpeechSat model)
- VisualBertConfig configuration class: VisualBertForPreTraining (VisualBert model)
- Wav2Vec2Config configuration class: Wav2Vec2ForPreTraining (Wav2Vec2 model)
- XLMConfig configuration class: XLMWithLMHeadModel (XLM model)
- XLMRobertaConfig configuration class: XLMRobertaForMaskedLM (XLM-RoBERTa model)
- XLNetConfig configuration class: XLNetLMHeadModel (XLNet model)
Instantiates one of the model classes of the library (with a pretraining head) from a configuration.
Note:
Loading a model from its configuration file does not load the model weights. It only affects the
model's configuration. Use from_pretrained()
to load the model
weights.
Examples:
>>> from transformers import AutoConfig, AutoModelForPreTraining
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> model = AutoModelForPreTraining.from_config(config)
from_pretrained( *model_args, **kwargs )
Parameters
- pretrained_model_name_or_path (str or os.PathLike) — Can be either:
  - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
  - A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
  - A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
- model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
- config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
  - The model is a model provided by the library (loaded with the model id string of a pretrained model).
  - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
  - The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
- state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
- cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
- force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
- proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
- revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
- kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
  - If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Instantiate one of the model classes of the library (with a pretraining head) from a pretrained model.
The model class to instantiate is selected based on the model_type
property of the config object (either
passed as an argument or loaded from pretrained_model_name_or_path
if possible), or when it's missing,
by falling back to using pattern matching on pretrained_model_name_or_path
:
- albert → AlbertForPreTraining (ALBERT model)
- bart → BartForConditionalGeneration (BART model)
- bert → BertForPreTraining (BERT model)
- big_bird → BigBirdForPreTraining (BigBird model)
- camembert → CamembertForMaskedLM (CamemBERT model)
- ctrl → CTRLLMHeadModel (CTRL model)
- deberta → DebertaForMaskedLM (DeBERTa model)
- deberta-v2 → DebertaV2ForMaskedLM (DeBERTa-v2 model)
- distilbert → DistilBertForMaskedLM (DistilBERT model)
- electra → ElectraForPreTraining (ELECTRA model)
- flaubert → FlaubertWithLMHeadModel (FlauBERT model)
- fnet → FNetForPreTraining (FNet model)
- fsmt → FSMTForConditionalGeneration (FairSeq Machine-Translation model)
- funnel → FunnelForPreTraining (Funnel Transformer model)
- gpt2 → GPT2LMHeadModel (OpenAI GPT-2 model)
- ibert → IBertForMaskedLM (I-BERT model)
- layoutlm → LayoutLMForMaskedLM (LayoutLM model)
- longformer → LongformerForMaskedLM (Longformer model)
- lxmert → LxmertForPreTraining (LXMERT model)
- megatron-bert → MegatronBertForPreTraining (MegatronBert model)
- mobilebert → MobileBertForPreTraining (MobileBERT model)
- mpnet → MPNetForMaskedLM (MPNet model)
- openai-gpt → OpenAIGPTLMHeadModel (OpenAI GPT model)
- retribert → RetriBertModel (RetriBERT model)
- roberta → RobertaForMaskedLM (RoBERTa model)
- squeezebert → SqueezeBertForMaskedLM (SqueezeBERT model)
- t5 → T5ForConditionalGeneration (T5 model)
- tapas → TapasForMaskedLM (TAPAS model)
- transfo-xl → TransfoXLLMHeadModel (Transformer-XL model)
- unispeech → UniSpeechForPreTraining (UniSpeech model)
- unispeech-sat → UniSpeechSatForPreTraining (UniSpeechSat model)
- visual_bert → VisualBertForPreTraining (VisualBert model)
- wav2vec2 → Wav2Vec2ForPreTraining (Wav2Vec2 model)
- xlm → XLMWithLMHeadModel (XLM model)
- xlm-roberta → XLMRobertaForMaskedLM (XLM-RoBERTa model)
- xlnet → XLNetLMHeadModel (XLNet model)
The model is set in evaluation mode by default using model.eval()
(so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForPreTraining
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForPreTraining.from_pretrained('bert-base-cased')
>>> # Update configuration during loading
>>> model = AutoModelForPreTraining.from_pretrained('bert-base-cased', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained('./tf_model/bert_tf_model_config.json')
>>> model = AutoModelForPreTraining.from_pretrained('./tf_model/bert_tf_checkpoint.ckpt.index', from_tf=True, config=config)
AutoModelForCausalLM
This is a generic model class that will be instantiated as one of the model classes of the library (with a causal language modeling head) when created
with the from_pretrained()
class method or the
from_config()
class method.
This class cannot be instantiated directly using __init__()
(throws an error).
from_config( **kwargs )
Parameters
- config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:
- BartConfig configuration class: BartForCausalLM (BART model)
- BertConfig configuration class: BertLMHeadModel (BERT model)
- BertGenerationConfig configuration class: BertGenerationDecoder (Bert Generation model)
- BigBirdConfig configuration class: BigBirdForCausalLM (BigBird model)
- BigBirdPegasusConfig configuration class: BigBirdPegasusForCausalLM (BigBirdPegasus model)
- BlenderbotConfig configuration class: BlenderbotForCausalLM (Blenderbot model)
- BlenderbotSmallConfig configuration class: BlenderbotSmallForCausalLM (BlenderbotSmall model)
- CTRLConfig configuration class: CTRLLMHeadModel (CTRL model)
- CamembertConfig configuration class: CamembertForCausalLM (CamemBERT model)
- GPT2Config configuration class: GPT2LMHeadModel (OpenAI GPT-2 model)
- GPTJConfig configuration class: GPTJForCausalLM (GPT-J model)
- GPTNeoConfig configuration class: GPTNeoForCausalLM (GPT Neo model)
- MBartConfig configuration class: MBartForCausalLM (mBART model)
- MarianConfig configuration class: MarianForCausalLM (Marian model)
- MegatronBertConfig configuration class: MegatronBertForCausalLM (MegatronBert model)
- OpenAIGPTConfig configuration class: OpenAIGPTLMHeadModel (OpenAI GPT model)
- PegasusConfig configuration class: PegasusForCausalLM (Pegasus model)
- ProphetNetConfig configuration class: ProphetNetForCausalLM (ProphetNet model)
- QDQBertConfig configuration class: QDQBertLMHeadModel (QDQBert model)
- ReformerConfig configuration class: ReformerModelWithLMHead (Reformer model)
- RemBertConfig configuration class: RemBertForCausalLM (RemBERT model)
- RoFormerConfig configuration class: RoFormerForCausalLM (RoFormer model)
- RobertaConfig configuration class: RobertaForCausalLM (RoBERTa model)
- Speech2Text2Config configuration class: Speech2Text2ForCausalLM (Speech2Text2 model)
- TrOCRConfig configuration class: TrOCRForCausalLM (TrOCR model)
- TransfoXLConfig configuration class: TransfoXLLMHeadModel (Transformer-XL model)
- XLMConfig configuration class: XLMWithLMHeadModel (XLM model)
- XLMProphetNetConfig configuration class: XLMProphetNetForCausalLM (XLMProphetNet model)
- XLMRobertaConfig configuration class: XLMRobertaForCausalLM (XLM-RoBERTa model)
- XLNetConfig configuration class: XLNetLMHeadModel (XLNet model)
Instantiates one of the model classes of the library (with a causal language modeling head) from a configuration.
Note:
Loading a model from its configuration file does not load the model weights. It only affects the
model's configuration. Use from_pretrained()
to load the model
weights.
Examples:
>>> from transformers import AutoConfig, AutoModelForCausalLM
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> model = AutoModelForCausalLM.from_config(config)
from_pretrained( *model_args, **kwargs )
Parameters
- pretrained_model_name_or_path (`str` or `os.PathLike`) — Can be either:
  - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced under a user or organization name, like `dbmdz/bert-base-german-cased`.
  - A path to a directory containing model weights saved using save_pretrained(), e.g., `./my_model_directory/`.
  - A path or url to a TensorFlow index checkpoint file (e.g., `./tf_model/model.ckpt.index`). In this case, `from_tf` should be set to `True` and a configuration object should be provided as the `config` argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
- model_args (additional positional arguments, optional) — Will be passed along to the underlying model `__init__()` method.
- config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
  - The model is a model provided by the library (loaded with the model id string of a pretrained model).
  - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
  - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named config.json is found in the directory.
- state_dict (`Dict[str, torch.Tensor]`, optional) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
- cache_dir (`str` or `os.PathLike`, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- from_tf (`bool`, optional, defaults to `False`) — Load the model weights from a TensorFlow checkpoint save file (see docstring of the `pretrained_model_name_or_path` argument).
- force_download (`bool`, optional, defaults to `False`) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- resume_download (`bool`, optional, defaults to `False`) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
- proxies (`Dict[str, str]`, optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- output_loading_info (`bool`, optional, defaults to `False`) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- local_files_only (`bool`, optional, defaults to `False`) — Whether or not to only look at local files (e.g., not try downloading the model).
- revision (`str`, optional, defaults to `"main"`) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.
- trust_remote_code (`bool`, optional, defaults to `False`) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
- kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:
  - If a configuration is provided with `config`, `**kwargs` will be directly passed to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, `kwargs` will first be passed to the configuration class initialization function (from_pretrained()). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.
Instantiate one of the model classes of the library (with a causal language modeling head) from a pretrained model.
The model class to instantiate is selected based on the `model_type` property of the config object (either passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or, when it's missing, by falling back to using pattern matching on `pretrained_model_name_or_path`:
- bart → BartForCausalLM (BART model)
- bert → BertLMHeadModel (BERT model)
- bert-generation → BertGenerationDecoder (Bert Generation model)
- big_bird → BigBirdForCausalLM (BigBird model)
- bigbird_pegasus → BigBirdPegasusForCausalLM (BigBirdPegasus model)
- blenderbot → BlenderbotForCausalLM (Blenderbot model)
- blenderbot-small → BlenderbotSmallForCausalLM (BlenderbotSmall model)
- camembert → CamembertForCausalLM (CamemBERT model)
- ctrl → CTRLLMHeadModel (CTRL model)
- gpt2 → GPT2LMHeadModel (OpenAI GPT-2 model)
- gpt_neo → GPTNeoForCausalLM (GPT Neo model)
- gptj → GPTJForCausalLM (GPT-J model)
- marian → MarianForCausalLM (Marian model)
- mbart → MBartForCausalLM (mBART model)
- megatron-bert → MegatronBertForCausalLM (MegatronBert model)
- openai-gpt → OpenAIGPTLMHeadModel (OpenAI GPT model)
- pegasus → PegasusForCausalLM (Pegasus model)
- prophetnet → ProphetNetForCausalLM (ProphetNet model)
- qdqbert → QDQBertLMHeadModel (QDQBert model)
- reformer → ReformerModelWithLMHead (Reformer model)
- rembert → RemBertForCausalLM (RemBERT model)
- roberta → RobertaForCausalLM (RoBERTa model)
- roformer → RoFormerForCausalLM (RoFormer model)
- speech_to_text_2 → Speech2Text2ForCausalLM (Speech2Text2 model)
- transfo-xl → TransfoXLLMHeadModel (Transformer-XL model)
- trocr → TrOCRForCausalLM (TrOCR model)
- xlm → XLMWithLMHeadModel (XLM model)
- xlm-prophetnet → XLMProphetNetForCausalLM (XLMProphetNet model)
- xlm-roberta → XLMRobertaForCausalLM (XLM-RoBERTa model)
- xlnet → XLNetLMHeadModel (XLNet model)
The model is set in evaluation mode by default using `model.eval()` (so, for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with `model.train()`.
Examples:
>>> from transformers import AutoConfig, AutoModelForCausalLM
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForCausalLM.from_pretrained('bert-base-cased')
>>> # Update configuration during loading
>>> model = AutoModelForCausalLM.from_pretrained('bert-base-cased', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained('./tf_model/bert_tf_model_config.json')
>>> model = AutoModelForCausalLM.from_pretrained('./tf_model/bert_tf_checkpoint.ckpt.index', from_tf=True, config=config)
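As an illustrative addition (not part of the original reference), here is a minimal end-to-end sketch of text generation with the loaded model; it assumes the `gpt2` checkpoint and the standard generate() API:
>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained('gpt2')
>>> model = AutoModelForCausalLM.from_pretrained('gpt2')
>>> inputs = tokenizer('Hello, my name is', return_tensors='pt')
>>> # Greedy generation up to 20 tokens, decoded back to a string.
>>> outputs = model.generate(**inputs, max_length=20)
>>> print(tokenizer.decode(outputs[0], skip_special_tokens=True))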
AutoModelForMaskedLM
This is a generic model class that will be instantiated as one of the model classes of the library (with a masked language modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using `__init__()` (throws an error).
( **kwargs )
Parameters
- config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:
- AlbertConfig configuration class: AlbertForMaskedLM (ALBERT model)
- BartConfig configuration class: BartForConditionalGeneration (BART model)
- BertConfig configuration class: BertForMaskedLM (BERT model)
- BigBirdConfig configuration class: BigBirdForMaskedLM (BigBird model)
- CamembertConfig configuration class: CamembertForMaskedLM (CamemBERT model)
- ConvBertConfig configuration class: ConvBertForMaskedLM (ConvBERT model)
- DebertaConfig configuration class: DebertaForMaskedLM (DeBERTa model)
- DebertaV2Config configuration class: DebertaV2ForMaskedLM (DeBERTa-v2 model)
- DistilBertConfig configuration class: DistilBertForMaskedLM (DistilBERT model)
- ElectraConfig configuration class: ElectraForMaskedLM (ELECTRA model)
- FNetConfig configuration class: FNetForMaskedLM (FNet model)
- FlaubertConfig configuration class: FlaubertWithLMHeadModel (FlauBERT model)
- FunnelConfig configuration class: FunnelForMaskedLM (Funnel Transformer model)
- IBertConfig configuration class: IBertForMaskedLM (I-BERT model)
- LayoutLMConfig configuration class: LayoutLMForMaskedLM (LayoutLM model)
- LongformerConfig configuration class: LongformerForMaskedLM (Longformer model)
- MBartConfig configuration class: MBartForConditionalGeneration (mBART model)
- MPNetConfig configuration class: MPNetForMaskedLM (MPNet model)
- MegatronBertConfig configuration class: MegatronBertForMaskedLM (MegatronBert model)
- MobileBertConfig configuration class: MobileBertForMaskedLM (MobileBERT model)
- PerceiverConfig configuration class: PerceiverForMaskedLM (Perceiver model)
- QDQBertConfig configuration class: QDQBertForMaskedLM (QDQBert model)
- ReformerConfig configuration class: ReformerForMaskedLM (Reformer model)
- RemBertConfig configuration class: RemBertForMaskedLM (RemBERT model)
- RoFormerConfig configuration class: RoFormerForMaskedLM (RoFormer model)
- RobertaConfig configuration class: RobertaForMaskedLM (RoBERTa model)
- SqueezeBertConfig configuration class: SqueezeBertForMaskedLM (SqueezeBERT model)
- TapasConfig configuration class: TapasForMaskedLM (TAPAS model)
- Wav2Vec2Config configuration class: Wav2Vec2ForMaskedLM (Wav2Vec2 model)
- XLMConfig configuration class: XLMWithLMHeadModel (XLM model)
- XLMRobertaConfig configuration class: XLMRobertaForMaskedLM (XLM-RoBERTa model)
Instantiates one of the model classes of the library (with a masked language modeling head) from a configuration.
Note:
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.
Examples:
>>> from transformers import AutoConfig, AutoModelForMaskedLM
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> model = AutoModelForMaskedLM.from_config(config)
( *model_args **kwargs )
Parameters
- pretrained_model_name_or_path (`str` or `os.PathLike`) — Can be either:
  - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced under a user or organization name, like `dbmdz/bert-base-german-cased`.
  - A path to a directory containing model weights saved using save_pretrained(), e.g., `./my_model_directory/`.
  - A path or url to a TensorFlow index checkpoint file (e.g., `./tf_model/model.ckpt.index`). In this case, `from_tf` should be set to `True` and a configuration object should be provided as the `config` argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
- model_args (additional positional arguments, optional) — Will be passed along to the underlying model `__init__()` method.
- config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
  - The model is a model provided by the library (loaded with the model id string of a pretrained model).
  - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
  - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named config.json is found in the directory.
- state_dict (`Dict[str, torch.Tensor]`, optional) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
- cache_dir (`str` or `os.PathLike`, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- from_tf (`bool`, optional, defaults to `False`) — Load the model weights from a TensorFlow checkpoint save file (see docstring of the `pretrained_model_name_or_path` argument).
- force_download (`bool`, optional, defaults to `False`) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- resume_download (`bool`, optional, defaults to `False`) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
- proxies (`Dict[str, str]`, optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- output_loading_info (`bool`, optional, defaults to `False`) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- local_files_only (`bool`, optional, defaults to `False`) — Whether or not to only look at local files (e.g., not try downloading the model).
- revision (`str`, optional, defaults to `"main"`) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.
- trust_remote_code (`bool`, optional, defaults to `False`) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
- kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:
  - If a configuration is provided with `config`, `**kwargs` will be directly passed to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, `kwargs` will first be passed to the configuration class initialization function (from_pretrained()). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.
Instantiate one of the model classes of the library (with a masked language modeling head) from a pretrained model.
The model class to instantiate is selected based on the `model_type` property of the config object (either passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or, when it's missing, by falling back to using pattern matching on `pretrained_model_name_or_path`:
- albert → AlbertForMaskedLM (ALBERT model)
- bart → BartForConditionalGeneration (BART model)
- bert → BertForMaskedLM (BERT model)
- big_bird → BigBirdForMaskedLM (BigBird model)
- camembert → CamembertForMaskedLM (CamemBERT model)
- convbert → ConvBertForMaskedLM (ConvBERT model)
- deberta → DebertaForMaskedLM (DeBERTa model)
- deberta-v2 → DebertaV2ForMaskedLM (DeBERTa-v2 model)
- distilbert → DistilBertForMaskedLM (DistilBERT model)
- electra → ElectraForMaskedLM (ELECTRA model)
- flaubert → FlaubertWithLMHeadModel (FlauBERT model)
- fnet → FNetForMaskedLM (FNet model)
- funnel → FunnelForMaskedLM (Funnel Transformer model)
- ibert → IBertForMaskedLM (I-BERT model)
- layoutlm → LayoutLMForMaskedLM (LayoutLM model)
- longformer → LongformerForMaskedLM (Longformer model)
- mbart → MBartForConditionalGeneration (mBART model)
- megatron-bert → MegatronBertForMaskedLM (MegatronBert model)
- mobilebert → MobileBertForMaskedLM (MobileBERT model)
- mpnet → MPNetForMaskedLM (MPNet model)
- perceiver → PerceiverForMaskedLM (Perceiver model)
- qdqbert → QDQBertForMaskedLM (QDQBert model)
- reformer → ReformerForMaskedLM (Reformer model)
- rembert → RemBertForMaskedLM (RemBERT model)
- roberta → RobertaForMaskedLM (RoBERTa model)
- roformer → RoFormerForMaskedLM (RoFormer model)
- squeezebert → SqueezeBertForMaskedLM (SqueezeBERT model)
- tapas → TapasForMaskedLM (TAPAS model)
- wav2vec2 → Wav2Vec2ForMaskedLM (Wav2Vec2 model)
- xlm → XLMWithLMHeadModel (XLM model)
- xlm-roberta → XLMRobertaForMaskedLM (XLM-RoBERTa model)
The model is set in evaluation mode by default using `model.eval()` (so, for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with `model.train()`.
Examples:
>>> from transformers import AutoConfig, AutoModelForMaskedLM
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForMaskedLM.from_pretrained('bert-base-cased')
>>> # Update configuration during loading
>>> model = AutoModelForMaskedLM.from_pretrained('bert-base-cased', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained('./tf_model/bert_tf_model_config.json')
>>> model = AutoModelForMaskedLM.from_pretrained('./tf_model/bert_tf_checkpoint.ckpt.index', from_tf=True, config=config)
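As an illustrative addition (not part of the original reference), a minimal fill-mask sketch with the loaded model, assuming the `bert-base-cased` checkpoint and its `[MASK]` token:
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForMaskedLM
>>> tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
>>> model = AutoModelForMaskedLM.from_pretrained('bert-base-cased')
>>> inputs = tokenizer('The capital of France is [MASK].', return_tensors='pt')
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> # Locate the masked position and take the highest-scoring vocabulary id.
>>> mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
>>> tokenizer.decode(logits[0, mask_index].argmax(-1))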
AutoModelForSeq2SeqLM
This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence-to-sequence language modeling head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using `__init__()` (throws an error).
( **kwargs )
Parameters
- config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:
- BartConfig configuration class: BartForConditionalGeneration (BART model)
- BigBirdPegasusConfig configuration class: BigBirdPegasusForConditionalGeneration (BigBirdPegasus model)
- BlenderbotConfig configuration class: BlenderbotForConditionalGeneration (Blenderbot model)
- BlenderbotSmallConfig configuration class: BlenderbotSmallForConditionalGeneration (BlenderbotSmall model)
- EncoderDecoderConfig configuration class: EncoderDecoderModel (Encoder decoder model)
- FSMTConfig configuration class: FSMTForConditionalGeneration (FairSeq Machine-Translation model)
- LEDConfig configuration class: LEDForConditionalGeneration (LED model)
- M2M100Config configuration class: M2M100ForConditionalGeneration (M2M100 model)
- MBartConfig configuration class: MBartForConditionalGeneration (mBART model)
- MT5Config configuration class: MT5ForConditionalGeneration (mT5 model)
- MarianConfig configuration class: MarianMTModel (Marian model)
- PegasusConfig configuration class: PegasusForConditionalGeneration (Pegasus model)
- ProphetNetConfig configuration class: ProphetNetForConditionalGeneration (ProphetNet model)
- T5Config configuration class: T5ForConditionalGeneration (T5 model)
- XLMProphetNetConfig configuration class: XLMProphetNetForConditionalGeneration (XLMProphetNet model)
Instantiates one of the model classes of the library (with a sequence-to-sequence language modeling head) from a configuration.
Note:
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.
Examples:
>>> from transformers import AutoConfig, AutoModelForSeq2SeqLM
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('t5-base')
>>> model = AutoModelForSeq2SeqLM.from_config(config)
( *model_args **kwargs )
Parameters
- pretrained_model_name_or_path (`str` or `os.PathLike`) — Can be either:
  - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced under a user or organization name, like `dbmdz/bert-base-german-cased`.
  - A path to a directory containing model weights saved using save_pretrained(), e.g., `./my_model_directory/`.
  - A path or url to a TensorFlow index checkpoint file (e.g., `./tf_model/model.ckpt.index`). In this case, `from_tf` should be set to `True` and a configuration object should be provided as the `config` argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
- model_args (additional positional arguments, optional) — Will be passed along to the underlying model `__init__()` method.
- config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
  - The model is a model provided by the library (loaded with the model id string of a pretrained model).
  - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
  - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named config.json is found in the directory.
- state_dict (`Dict[str, torch.Tensor]`, optional) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
- cache_dir (`str` or `os.PathLike`, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- from_tf (`bool`, optional, defaults to `False`) — Load the model weights from a TensorFlow checkpoint save file (see docstring of the `pretrained_model_name_or_path` argument).
- force_download (`bool`, optional, defaults to `False`) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- resume_download (`bool`, optional, defaults to `False`) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
- proxies (`Dict[str, str]`, optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- output_loading_info (`bool`, optional, defaults to `False`) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- local_files_only (`bool`, optional, defaults to `False`) — Whether or not to only look at local files (e.g., not try downloading the model).
- revision (`str`, optional, defaults to `"main"`) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.
- trust_remote_code (`bool`, optional, defaults to `False`) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
- kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:
  - If a configuration is provided with `config`, `**kwargs` will be directly passed to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, `kwargs` will first be passed to the configuration class initialization function (from_pretrained()). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.
Instantiate one of the model classes of the library (with a sequence-to-sequence language modeling head) from a pretrained model.
The model class to instantiate is selected based on the `model_type` property of the config object (either passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or, when it's missing, by falling back to using pattern matching on `pretrained_model_name_or_path`:
- bart → BartForConditionalGeneration (BART model)
- bigbird_pegasus → BigBirdPegasusForConditionalGeneration (BigBirdPegasus model)
- blenderbot → BlenderbotForConditionalGeneration (Blenderbot model)
- blenderbot-small → BlenderbotSmallForConditionalGeneration (BlenderbotSmall model)
- encoder-decoder → EncoderDecoderModel (Encoder decoder model)
- fsmt → FSMTForConditionalGeneration (FairSeq Machine-Translation model)
- led → LEDForConditionalGeneration (LED model)
- m2m_100 → M2M100ForConditionalGeneration (M2M100 model)
- marian → MarianMTModel (Marian model)
- mbart → MBartForConditionalGeneration (mBART model)
- mt5 → MT5ForConditionalGeneration (mT5 model)
- pegasus → PegasusForConditionalGeneration (Pegasus model)
- prophetnet → ProphetNetForConditionalGeneration (ProphetNet model)
- t5 → T5ForConditionalGeneration (T5 model)
- xlm-prophetnet → XLMProphetNetForConditionalGeneration (XLMProphetNet model)
The model is set in evaluation mode by default using `model.eval()` (so, for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with `model.train()`.
Examples:
>>> from transformers import AutoConfig, AutoModelForSeq2SeqLM
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForSeq2SeqLM.from_pretrained('t5-base')
>>> # Update configuration during loading
>>> model = AutoModelForSeq2SeqLM.from_pretrained('t5-base', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained('./tf_model/t5_tf_model_config.json')
>>> model = AutoModelForSeq2SeqLM.from_pretrained('./tf_model/t5_tf_checkpoint.ckpt.index', from_tf=True, config=config)
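As an illustrative addition (not part of the original reference), a minimal translation sketch, assuming the `t5-base` checkpoint and its "translate" task prefix:
>>> from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
>>> tokenizer = AutoTokenizer.from_pretrained('t5-base')
>>> model = AutoModelForSeq2SeqLM.from_pretrained('t5-base')
>>> inputs = tokenizer('translate English to German: The house is wonderful.', return_tensors='pt')
>>> # The encoder-decoder model generates the target sequence token by token.
>>> outputs = model.generate(**inputs)
>>> print(tokenizer.decode(outputs[0], skip_special_tokens=True))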
AutoModelForSequenceClassification
This is a generic model class that will be instantiated as one of the model classes of the library (with a sequence classification head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using `__init__()` (throws an error).
( **kwargs )
Parameters
- config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:
- AlbertConfig configuration class: AlbertForSequenceClassification (ALBERT model)
- BartConfig configuration class: BartForSequenceClassification (BART model)
- BertConfig configuration class: BertForSequenceClassification (BERT model)
- BigBirdConfig configuration class: BigBirdForSequenceClassification (BigBird model)
- BigBirdPegasusConfig configuration class: BigBirdPegasusForSequenceClassification (BigBirdPegasus model)
- CTRLConfig configuration class: CTRLForSequenceClassification (CTRL model)
- CamembertConfig configuration class: CamembertForSequenceClassification (CamemBERT model)
- CanineConfig configuration class: CanineForSequenceClassification (Canine model)
- ConvBertConfig configuration class: ConvBertForSequenceClassification (ConvBERT model)
- DebertaConfig configuration class: DebertaForSequenceClassification (DeBERTa model)
- DebertaV2Config configuration class: DebertaV2ForSequenceClassification (DeBERTa-v2 model)
- DistilBertConfig configuration class: DistilBertForSequenceClassification (DistilBERT model)
- ElectraConfig configuration class: ElectraForSequenceClassification (ELECTRA model)
- FNetConfig configuration class: FNetForSequenceClassification (FNet model)
- FlaubertConfig configuration class: FlaubertForSequenceClassification (FlauBERT model)
- FunnelConfig configuration class: FunnelForSequenceClassification (Funnel Transformer model)
- GPT2Config configuration class: GPT2ForSequenceClassification (OpenAI GPT-2 model)
- GPTJConfig configuration class: GPTJForSequenceClassification (GPT-J model)
- GPTNeoConfig configuration class: GPTNeoForSequenceClassification (GPT Neo model)
- IBertConfig configuration class: IBertForSequenceClassification (I-BERT model)
- LEDConfig configuration class: LEDForSequenceClassification (LED model)
- LayoutLMConfig configuration class: LayoutLMForSequenceClassification (LayoutLM model)
- LayoutLMv2Config configuration class: LayoutLMv2ForSequenceClassification (LayoutLMv2 model)
- LongformerConfig configuration class: LongformerForSequenceClassification (Longformer model)
- MBartConfig configuration class: MBartForSequenceClassification (mBART model)
- MPNetConfig configuration class: MPNetForSequenceClassification (MPNet model)
- MegatronBertConfig configuration class: MegatronBertForSequenceClassification (MegatronBert model)
- MobileBertConfig configuration class: MobileBertForSequenceClassification (MobileBERT model)
- OpenAIGPTConfig configuration class: OpenAIGPTForSequenceClassification (OpenAI GPT model)
- PerceiverConfig configuration class: PerceiverForSequenceClassification (Perceiver model)
- QDQBertConfig configuration class: QDQBertForSequenceClassification (QDQBert model)
- ReformerConfig configuration class: ReformerForSequenceClassification (Reformer model)
- RemBertConfig configuration class: RemBertForSequenceClassification (RemBERT model)
- RoFormerConfig configuration class: RoFormerForSequenceClassification (RoFormer model)
- RobertaConfig configuration class: RobertaForSequenceClassification (RoBERTa model)
- SqueezeBertConfig configuration class: SqueezeBertForSequenceClassification (SqueezeBERT model)
- TapasConfig configuration class: TapasForSequenceClassification (TAPAS model)
- TransfoXLConfig configuration class: TransfoXLForSequenceClassification (Transformer-XL model)
- XLMConfig configuration class: XLMForSequenceClassification (XLM model)
- XLMRobertaConfig configuration class: XLMRobertaForSequenceClassification (XLM-RoBERTa model)
- XLNetConfig configuration class: XLNetForSequenceClassification (XLNet model)
Instantiates one of the model classes of the library (with a sequence classification head) from a configuration.
Note:
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.
Examples:
>>> from transformers import AutoConfig, AutoModelForSequenceClassification
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> model = AutoModelForSequenceClassification.from_config(config)
( *model_args **kwargs )
Parameters
- pretrained_model_name_or_path (`str` or `os.PathLike`) — Can be either:
  - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced under a user or organization name, like `dbmdz/bert-base-german-cased`.
  - A path to a directory containing model weights saved using save_pretrained(), e.g., `./my_model_directory/`.
  - A path or url to a TensorFlow index checkpoint file (e.g., `./tf_model/model.ckpt.index`). In this case, `from_tf` should be set to `True` and a configuration object should be provided as the `config` argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
- model_args (additional positional arguments, optional) — Will be passed along to the underlying model `__init__()` method.
- config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
  - The model is a model provided by the library (loaded with the model id string of a pretrained model).
  - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
  - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named config.json is found in the directory.
- state_dict (`Dict[str, torch.Tensor]`, optional) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
- cache_dir (`str` or `os.PathLike`, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- from_tf (`bool`, optional, defaults to `False`) — Load the model weights from a TensorFlow checkpoint save file (see docstring of the `pretrained_model_name_or_path` argument).
- force_download (`bool`, optional, defaults to `False`) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- resume_download (`bool`, optional, defaults to `False`) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
- proxies (`Dict[str, str]`, optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- output_loading_info (`bool`, optional, defaults to `False`) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- local_files_only (`bool`, optional, defaults to `False`) — Whether or not to only look at local files (e.g., not try downloading the model).
- revision (`str`, optional, defaults to `"main"`) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.
- trust_remote_code (`bool`, optional, defaults to `False`) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
- kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:
  - If a configuration is provided with `config`, `**kwargs` will be directly passed to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, `kwargs` will first be passed to the configuration class initialization function (from_pretrained()). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.
Instantiate one of the model classes of the library (with a sequence classification head) from a pretrained model.
The model class to instantiate is selected based on the `model_type` property of the config object (either passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or, when it's missing, by falling back to using pattern matching on `pretrained_model_name_or_path`:
- albert → AlbertForSequenceClassification (ALBERT model)
- bart → BartForSequenceClassification (BART model)
- bert → BertForSequenceClassification (BERT model)
- big_bird → BigBirdForSequenceClassification (BigBird model)
- bigbird_pegasus → BigBirdPegasusForSequenceClassification (BigBirdPegasus model)
- camembert → CamembertForSequenceClassification (CamemBERT model)
- canine → CanineForSequenceClassification (Canine model)
- convbert → ConvBertForSequenceClassification (ConvBERT model)
- ctrl → CTRLForSequenceClassification (CTRL model)
- deberta → DebertaForSequenceClassification (DeBERTa model)
- deberta-v2 → DebertaV2ForSequenceClassification (DeBERTa-v2 model)
- distilbert → DistilBertForSequenceClassification (DistilBERT model)
- electra → ElectraForSequenceClassification (ELECTRA model)
- flaubert → FlaubertForSequenceClassification (FlauBERT model)
- fnet → FNetForSequenceClassification (FNet model)
- funnel → FunnelForSequenceClassification (Funnel Transformer model)
- gpt2 → GPT2ForSequenceClassification (OpenAI GPT-2 model)
- gpt_neo → GPTNeoForSequenceClassification (GPT Neo model)
- gptj → GPTJForSequenceClassification (GPT-J model)
- ibert → IBertForSequenceClassification (I-BERT model)
- layoutlm → LayoutLMForSequenceClassification (LayoutLM model)
- layoutlmv2 → LayoutLMv2ForSequenceClassification (LayoutLMv2 model)
- led → LEDForSequenceClassification (LED model)
- longformer → LongformerForSequenceClassification (Longformer model)
- mbart → MBartForSequenceClassification (mBART model)
- megatron-bert → MegatronBertForSequenceClassification (MegatronBert model)
- mobilebert → MobileBertForSequenceClassification (MobileBERT model)
- mpnet → MPNetForSequenceClassification (MPNet model)
- openai-gpt → OpenAIGPTForSequenceClassification (OpenAI GPT model)
- perceiver → PerceiverForSequenceClassification (Perceiver model)
- qdqbert → QDQBertForSequenceClassification (QDQBert model)
- reformer → ReformerForSequenceClassification (Reformer model)
- rembert → RemBertForSequenceClassification (RemBERT model)
- roberta → RobertaForSequenceClassification (RoBERTa model)
- roformer → RoFormerForSequenceClassification (RoFormer model)
- squeezebert → SqueezeBertForSequenceClassification (SqueezeBERT model)
- tapas → TapasForSequenceClassification (TAPAS model)
- transfo-xl → TransfoXLForSequenceClassification (Transformer-XL model)
- xlm → XLMForSequenceClassification (XLM model)
- xlm-roberta → XLMRobertaForSequenceClassification (XLM-RoBERTa model)
- xlnet → XLNetForSequenceClassification (XLNet model)
The model is set in evaluation mode by default using `model.eval()` (so, for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with `model.train()`.
Examples:
>>> from transformers import AutoConfig, AutoModelForSequenceClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForSequenceClassification.from_pretrained('bert-base-cased')
>>> # Update configuration during loading
>>> model = AutoModelForSequenceClassification.from_pretrained('bert-base-cased', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained('./tf_model/bert_tf_model_config.json')
>>> model = AutoModelForSequenceClassification.from_pretrained('./tf_model/bert_tf_checkpoint.ckpt.index', from_tf=True, config=config)
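As an illustrative addition (not part of the original reference), a minimal classification sketch; the sentiment checkpoint `distilbert-base-uncased-finetuned-sst-2-english` is an assumption here, chosen because its classification head is already fine-tuned:
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification
>>> tokenizer = AutoTokenizer.from_pretrained('distilbert-base-uncased-finetuned-sst-2-english')
>>> model = AutoModelForSequenceClassification.from_pretrained('distilbert-base-uncased-finetuned-sst-2-english')
>>> inputs = tokenizer('This movie was great!', return_tensors='pt')
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> # Map the winning logit back to its human-readable label.
>>> model.config.id2label[logits.argmax(-1).item()]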
AutoModelForMultipleChoice
This is a generic model class that will be instantiated as one of the model classes of the library (with a multiple choice head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using `__init__()` (throws an error).
( **kwargs )
Parameters
- config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:
- AlbertConfig configuration class: AlbertForMultipleChoice (ALBERT model)
- BertConfig configuration class: BertForMultipleChoice (BERT model)
- BigBirdConfig configuration class: BigBirdForMultipleChoice (BigBird model)
- CamembertConfig configuration class: CamembertForMultipleChoice (CamemBERT model)
- CanineConfig configuration class: CanineForMultipleChoice (Canine model)
- ConvBertConfig configuration class: ConvBertForMultipleChoice (ConvBERT model)
- DistilBertConfig configuration class: DistilBertForMultipleChoice (DistilBERT model)
- ElectraConfig configuration class: ElectraForMultipleChoice (ELECTRA model)
- FNetConfig configuration class: FNetForMultipleChoice (FNet model)
- FlaubertConfig configuration class: FlaubertForMultipleChoice (FlauBERT model)
- FunnelConfig configuration class: FunnelForMultipleChoice (Funnel Transformer model)
- IBertConfig configuration class: IBertForMultipleChoice (I-BERT model)
- LongformerConfig configuration class: LongformerForMultipleChoice (Longformer model)
- MPNetConfig configuration class: MPNetForMultipleChoice (MPNet model)
- MegatronBertConfig configuration class: MegatronBertForMultipleChoice (MegatronBert model)
- MobileBertConfig configuration class: MobileBertForMultipleChoice (MobileBERT model)
- QDQBertConfig configuration class: QDQBertForMultipleChoice (QDQBert model)
- RemBertConfig configuration class: RemBertForMultipleChoice (RemBERT model)
- RoFormerConfig configuration class: RoFormerForMultipleChoice (RoFormer model)
- RobertaConfig configuration class: RobertaForMultipleChoice (RoBERTa model)
- SqueezeBertConfig configuration class: SqueezeBertForMultipleChoice (SqueezeBERT model)
- XLMConfig configuration class: XLMForMultipleChoice (XLM model)
- XLMRobertaConfig configuration class: XLMRobertaForMultipleChoice (XLM-RoBERTa model)
- XLNetConfig configuration class: XLNetForMultipleChoice (XLNet model)
Instantiates one of the model classes of the library (with a multiple choice head) from a configuration.
Note:
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.
Examples:
>>> from transformers import AutoConfig, AutoModelForMultipleChoice
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> model = AutoModelForMultipleChoice.from_config(config)
( *model_args **kwargs )
Parameters
- pretrained_model_name_or_path (`str` or `os.PathLike`) — Can be either:
  - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced under a user or organization name, like `dbmdz/bert-base-german-cased`.
  - A path to a directory containing model weights saved using save_pretrained(), e.g., `./my_model_directory/`.
  - A path or url to a TensorFlow index checkpoint file (e.g., `./tf_model/model.ckpt.index`). In this case, `from_tf` should be set to `True` and a configuration object should be provided as the `config` argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
- model_args (additional positional arguments, optional) — Will be passed along to the underlying model `__init__()` method.
- config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
  - The model is a model provided by the library (loaded with the model id string of a pretrained model).
  - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
  - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named config.json is found in the directory.
- state_dict (`Dict[str, torch.Tensor]`, optional) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
- cache_dir (`str` or `os.PathLike`, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- from_tf (`bool`, optional, defaults to `False`) — Load the model weights from a TensorFlow checkpoint save file (see docstring of the `pretrained_model_name_or_path` argument).
- force_download (`bool`, optional, defaults to `False`) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- resume_download (`bool`, optional, defaults to `False`) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
- proxies (`Dict[str, str]`, optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- output_loading_info (`bool`, optional, defaults to `False`) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- local_files_only (`bool`, optional, defaults to `False`) — Whether or not to only look at local files (e.g., not try downloading the model).
- revision (`str`, optional, defaults to `"main"`) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.
- trust_remote_code (`bool`, optional, defaults to `False`) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
- kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:
  - If a configuration is provided with `config`, `**kwargs` will be directly passed to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, `kwargs` will first be passed to the configuration class initialization function (from_pretrained()). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.
Instantiate one of the model classes of the library (with a multiple choice head) from a pretrained model.
The model class to instantiate is selected based on the `model_type` property of the config object (either passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or, when it's missing, by falling back to using pattern matching on `pretrained_model_name_or_path`:
- albert → AlbertForMultipleChoice (ALBERT model)
- bert → BertForMultipleChoice (BERT model)
- big_bird → BigBirdForMultipleChoice (BigBird model)
- camembert → CamembertForMultipleChoice (CamemBERT model)
- canine → CanineForMultipleChoice (Canine model)
- convbert → ConvBertForMultipleChoice (ConvBERT model)
- distilbert → DistilBertForMultipleChoice (DistilBERT model)
- electra → ElectraForMultipleChoice (ELECTRA model)
- flaubert → FlaubertForMultipleChoice (FlauBERT model)
- fnet → FNetForMultipleChoice (FNet model)
- funnel → FunnelForMultipleChoice (Funnel Transformer model)
- ibert → IBertForMultipleChoice (I-BERT model)
- longformer → LongformerForMultipleChoice (Longformer model)
- megatron-bert → MegatronBertForMultipleChoice (MegatronBert model)
- mobilebert → MobileBertForMultipleChoice (MobileBERT model)
- mpnet → MPNetForMultipleChoice (MPNet model)
- qdqbert → QDQBertForMultipleChoice (QDQBert model)
- rembert → RemBertForMultipleChoice (RemBERT model)
- roberta → RobertaForMultipleChoice (RoBERTa model)
- roformer → RoFormerForMultipleChoice (RoFormer model)
- squeezebert → SqueezeBertForMultipleChoice (SqueezeBERT model)
- xlm → XLMForMultipleChoice (XLM model)
- xlm-roberta → XLMRobertaForMultipleChoice (XLM-RoBERTa model)
- xlnet → XLNetForMultipleChoice (XLNet model)
The model is set in evaluation mode by default using `model.eval()` (so, for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with `model.train()`.
Examples:
>>> from transformers import AutoConfig, AutoModelForMultipleChoice
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForMultipleChoice.from_pretrained('bert-base-cased')
>>> # Update configuration during loading
>>> model = AutoModelForMultipleChoice.from_pretrained('bert-base-cased', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained('./tf_model/bert_tf_model_config.json')
>>> model = AutoModelForMultipleChoice.from_pretrained('./tf_model/bert_tf_checkpoint.ckpt.index', from_tf=True, config=config)
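As an illustrative addition (not part of the original reference), a minimal multiple-choice sketch; `bert-base-cased` is assumed, and since its multiple-choice head is not fine-tuned, the prediction is illustrative only:
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForMultipleChoice
>>> tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
>>> model = AutoModelForMultipleChoice.from_pretrained('bert-base-cased')
>>> prompt = 'The sky is'
>>> choices = ['blue.', 'a filesystem.']
>>> encoding = tokenizer([prompt, prompt], choices, return_tensors='pt', padding=True)
>>> # Multiple-choice models expect tensors of shape (batch_size, num_choices, seq_len).
>>> logits = model(**{k: v.unsqueeze(0) for k, v in encoding.items()}).logits
>>> logits.argmax(-1)  # index of the choice the model scores highest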
AutoModelForNextSentencePrediction
This is a generic model class that will be instantiated as one of the model classes of the library (with a next sentence prediction head) when created with the from_pretrained() class method or the from_config() class method.
This class cannot be instantiated directly using `__init__()` (throws an error).
( **kwargs )
Parameters
- config (PretrainedConfig) — The model class to instantiate is selected based on the configuration class:
- BertConfig configuration class: BertForNextSentencePrediction (BERT model)
- FNetConfig configuration class: FNetForNextSentencePrediction (FNet model)
- MegatronBertConfig configuration class: MegatronBertForNextSentencePrediction (MegatronBert model)
- MobileBertConfig configuration class: MobileBertForNextSentencePrediction (MobileBERT model)
- QDQBertConfig configuration class: QDQBertForNextSentencePrediction (QDQBert model)
Instantiates one of the model classes of the library (with a next sentence prediction head) from a configuration.
Note:
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.
Examples:
>>> from transformers import AutoConfig, AutoModelForNextSentencePrediction
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> model = AutoModelForNextSentencePrediction.from_config(config)
( *model_args **kwargs )
Parameters
- pretrained_model_name_or_path (`str` or `os.PathLike`) — Can be either:
  - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like `bert-base-uncased`, or namespaced under a user or organization name, like `dbmdz/bert-base-german-cased`.
  - A path to a directory containing model weights saved using save_pretrained(), e.g., `./my_model_directory/`.
  - A path or url to a TensorFlow index checkpoint file (e.g., `./tf_model/model.ckpt.index`). In this case, `from_tf` should be set to `True` and a configuration object should be provided as the `config` argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
- model_args (additional positional arguments, optional) — Will be passed along to the underlying model `__init__()` method.
- config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
  - The model is a model provided by the library (loaded with the model id string of a pretrained model).
  - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
  - The model is loaded by supplying a local directory as `pretrained_model_name_or_path` and a configuration JSON file named config.json is found in the directory.
- state_dict (`Dict[str, torch.Tensor]`, optional) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
- cache_dir (`str` or `os.PathLike`, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- from_tf (`bool`, optional, defaults to `False`) — Load the model weights from a TensorFlow checkpoint save file (see docstring of the `pretrained_model_name_or_path` argument).
- force_download (`bool`, optional, defaults to `False`) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- resume_download (`bool`, optional, defaults to `False`) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
- proxies (`Dict[str, str]`, optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- output_loading_info (`bool`, optional, defaults to `False`) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- local_files_only (`bool`, optional, defaults to `False`) — Whether or not to only look at local files (e.g., not try downloading the model).
- revision (`str`, optional, defaults to `"main"`) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.
- trust_remote_code (`bool`, optional, defaults to `False`) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to `True` for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
- kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initiate the model (e.g., `output_attentions=True`). Behaves differently depending on whether a `config` is provided or automatically loaded:
  - If a configuration is provided with `config`, `**kwargs` will be directly passed to the underlying model's `__init__` method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, `kwargs` will first be passed to the configuration class initialization function (from_pretrained()). Each key of `kwargs` that corresponds to a configuration attribute will be used to override said attribute with the supplied `kwargs` value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's `__init__` function.
Instantiate one of the model classes of the library (with a next sentence prediction head) from a pretrained model.
The model class to instantiate is selected based on the `model_type` property of the config object (either passed as an argument or loaded from `pretrained_model_name_or_path` if possible), or, when it's missing, by falling back to using pattern matching on `pretrained_model_name_or_path`:
- bert → BertForNextSentencePrediction (BERT model)
- fnet → FNetForNextSentencePrediction (FNet model)
- megatron-bert → MegatronBertForNextSentencePrediction (MegatronBert model)
- mobilebert → MobileBertForNextSentencePrediction (MobileBERT model)
- qdqbert → QDQBertForNextSentencePrediction (QDQBert model)
The model is set in evaluation mode by default using model.eval()
(so for instance, dropout modules are
deactivated). To train the model, you should first set it back in training mode with model.train()
Examples:
>>> from transformers import AutoConfig, AutoModelForNextSentencePrediction
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForNextSentencePrediction.from_pretrained('bert-base-cased')
>>> # Update configuration during loading
>>> model = AutoModelForNextSentencePrediction.from_pretrained('bert-base-cased', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained('./tf_model/bert_tf_model_config.json')
>>> model = AutoModelForNextSentencePrediction.from_pretrained('./tf_model/bert_tf_checkpoint.ckpt.index', from_tf=True, config=config)
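For a sense of what this head does at inference time, here is a minimal sketch (the sentence pair is invented for the example; for BERT-style NSP heads, logit index 0 scores "sentence B follows sentence A" and index 1 scores "sentence B is random"):

>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForNextSentencePrediction

>>> tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
>>> model = AutoModelForNextSentencePrediction.from_pretrained('bert-base-cased')

>>> # Encode the sentence pair and score whether sentence B plausibly follows sentence A.
>>> encoding = tokenizer("The sky is very dark.", "It will likely rain later.", return_tensors='pt')
>>> with torch.no_grad():
...     logits = model(**encoding).logits
>>> probabilities = logits.softmax(dim=-1)  # column 0: "is next", column 1: "is random"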
AutoModelForTokenClassification
This is a generic model class that will be instantiated as one of the model classes of the library (with a token classification head) when created
with the from_pretrained()
class method or the
from_config()
class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
-
config (PretrainedConfig) —
The model class to instantiate is selected based on the configuration class:
- AlbertConfig configuration class: AlbertForTokenClassification (ALBERT model)
- BertConfig configuration class: BertForTokenClassification (BERT model)
- BigBirdConfig configuration class: BigBirdForTokenClassification (BigBird model)
- CamembertConfig configuration class: CamembertForTokenClassification (CamemBERT model)
- CanineConfig configuration class: CanineForTokenClassification (Canine model)
- ConvBertConfig configuration class: ConvBertForTokenClassification (ConvBERT model)
- DebertaConfig configuration class: DebertaForTokenClassification (DeBERTa model)
- DebertaV2Config configuration class: DebertaV2ForTokenClassification (DeBERTa-v2 model)
- DistilBertConfig configuration class: DistilBertForTokenClassification (DistilBERT model)
- ElectraConfig configuration class: ElectraForTokenClassification (ELECTRA model)
- FNetConfig configuration class: FNetForTokenClassification (FNet model)
- FlaubertConfig configuration class: FlaubertForTokenClassification (FlauBERT model)
- FunnelConfig configuration class: FunnelForTokenClassification (Funnel Transformer model)
- GPT2Config configuration class: GPT2ForTokenClassification (OpenAI GPT-2 model)
- IBertConfig configuration class: IBertForTokenClassification (I-BERT model)
- LayoutLMConfig configuration class: LayoutLMForTokenClassification (LayoutLM model)
- LayoutLMv2Config configuration class: LayoutLMv2ForTokenClassification (LayoutLMv2 model)
- LongformerConfig configuration class: LongformerForTokenClassification (Longformer model)
- MPNetConfig configuration class: MPNetForTokenClassification (MPNet model)
- MegatronBertConfig configuration class: MegatronBertForTokenClassification (MegatronBert model)
- MobileBertConfig configuration class: MobileBertForTokenClassification (MobileBERT model)
- QDQBertConfig configuration class: QDQBertForTokenClassification (QDQBert model)
- RemBertConfig configuration class: RemBertForTokenClassification (RemBERT model)
- RoFormerConfig configuration class: RoFormerForTokenClassification (RoFormer model)
- RobertaConfig configuration class: RobertaForTokenClassification (RoBERTa model)
- SqueezeBertConfig configuration class: SqueezeBertForTokenClassification (SqueezeBERT model)
- XLMConfig configuration class: XLMForTokenClassification (XLM model)
- XLMRobertaConfig configuration class: XLMRobertaForTokenClassification (XLM-RoBERTa model)
- XLNetConfig configuration class: XLNetForTokenClassification (XLNet model)
Instantiates one of the model classes of the library (with a token classification head) from a configuration.
Note:
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.
Examples:
>>> from transformers import AutoConfig, AutoModelForTokenClassification
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> model = AutoModelForTokenClassification.from_config(config)
( *model_args **kwargs )
Parameters
-
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
  - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
  - A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
  - A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
-
model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
-
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
  - The model is a model provided by the library (loaded with the model id string of a pretrained model).
  - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
  - The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
-
state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
-
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
-
from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
-
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
-
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
-
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
-
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
-
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
-
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
-
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
-
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
  - If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Instantiate one of the model classes of the library (with a token classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or, when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
- albert → AlbertForTokenClassification (ALBERT model)
- bert → BertForTokenClassification (BERT model)
- big_bird → BigBirdForTokenClassification (BigBird model)
- camembert → CamembertForTokenClassification (CamemBERT model)
- canine → CanineForTokenClassification (Canine model)
- convbert → ConvBertForTokenClassification (ConvBERT model)
- deberta → DebertaForTokenClassification (DeBERTa model)
- deberta-v2 → DebertaV2ForTokenClassification (DeBERTa-v2 model)
- distilbert → DistilBertForTokenClassification (DistilBERT model)
- electra → ElectraForTokenClassification (ELECTRA model)
- flaubert → FlaubertForTokenClassification (FlauBERT model)
- fnet → FNetForTokenClassification (FNet model)
- funnel → FunnelForTokenClassification (Funnel Transformer model)
- gpt2 → GPT2ForTokenClassification (OpenAI GPT-2 model)
- ibert → IBertForTokenClassification (I-BERT model)
- layoutlm → LayoutLMForTokenClassification (LayoutLM model)
- layoutlmv2 → LayoutLMv2ForTokenClassification (LayoutLMv2 model)
- longformer → LongformerForTokenClassification (Longformer model)
- megatron-bert → MegatronBertForTokenClassification (MegatronBert model)
- mobilebert → MobileBertForTokenClassification (MobileBERT model)
- mpnet → MPNetForTokenClassification (MPNet model)
- qdqbert → QDQBertForTokenClassification (QDQBert model)
- rembert → RemBertForTokenClassification (RemBERT model)
- roberta → RobertaForTokenClassification (RoBERTa model)
- roformer → RoFormerForTokenClassification (RoFormer model)
- squeezebert → SqueezeBertForTokenClassification (SqueezeBERT model)
- xlm → XLMForTokenClassification (XLM model)
- xlm-roberta → XLMRobertaForTokenClassification (XLM-RoBERTa model)
- xlnet → XLNetForTokenClassification (XLNet model)
The model is set in evaluation mode by default using model.eval() (so, for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForTokenClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForTokenClassification.from_pretrained('bert-base-cased')
>>> # Update configuration during loading
>>> model = AutoModelForTokenClassification.from_pretrained('bert-base-cased', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained('./tf_model/bert_tf_model_config.json')
>>> model = AutoModelForTokenClassification.from_pretrained('./tf_model/bert_tf_checkpoint.ckpt.index', from_tf=True, config=config)
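A minimal end-to-end token classification sketch (the sentence is invented; the NER-finetuned checkpoint dbmdz/bert-large-cased-finetuned-conll03-english is just one convenient choice, since its id2label mapping yields readable entity tags):

>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForTokenClassification

>>> checkpoint = 'dbmdz/bert-large-cased-finetuned-conll03-english'
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> model = AutoModelForTokenClassification.from_pretrained(checkpoint)

>>> inputs = tokenizer("Hugging Face is based in New York City.", return_tensors='pt')
>>> with torch.no_grad():
...     logits = model(**inputs).logits  # shape: (batch, sequence_length, num_labels)
>>> # One predicted label per token, mapped back to label names via the config.
>>> predictions = logits.argmax(dim=-1)[0]
>>> labels = [model.config.id2label[int(i)] for i in predictions]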
AutoModelForQuestionAnswering
This is a generic model class that will be instantiated as one of the model classes of the library (with a question answering head) when created
with the from_pretrained()
class method or the
from_config()
class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
-
config (PretrainedConfig) —
The model class to instantiate is selected based on the configuration class:
- AlbertConfig configuration class: AlbertForQuestionAnswering (ALBERT model)
- BartConfig configuration class: BartForQuestionAnswering (BART model)
- BertConfig configuration class: BertForQuestionAnswering (BERT model)
- BigBirdConfig configuration class: BigBirdForQuestionAnswering (BigBird model)
- BigBirdPegasusConfig configuration class: BigBirdPegasusForQuestionAnswering (BigBirdPegasus model)
- CamembertConfig configuration class: CamembertForQuestionAnswering (CamemBERT model)
- CanineConfig configuration class: CanineForQuestionAnswering (Canine model)
- ConvBertConfig configuration class: ConvBertForQuestionAnswering (ConvBERT model)
- DebertaConfig configuration class: DebertaForQuestionAnswering (DeBERTa model)
- DebertaV2Config configuration class: DebertaV2ForQuestionAnswering (DeBERTa-v2 model)
- DistilBertConfig configuration class: DistilBertForQuestionAnswering (DistilBERT model)
- ElectraConfig configuration class: ElectraForQuestionAnswering (ELECTRA model)
- FNetConfig configuration class: FNetForQuestionAnswering (FNet model)
- FlaubertConfig configuration class: FlaubertForQuestionAnsweringSimple (FlauBERT model)
- FunnelConfig configuration class: FunnelForQuestionAnswering (Funnel Transformer model)
- GPTJConfig configuration class: GPTJForQuestionAnswering (GPT-J model)
- IBertConfig configuration class: IBertForQuestionAnswering (I-BERT model)
- LEDConfig configuration class: LEDForQuestionAnswering (LED model)
- LayoutLMv2Config configuration class: LayoutLMv2ForQuestionAnswering (LayoutLMv2 model)
- LongformerConfig configuration class: LongformerForQuestionAnswering (Longformer model)
- LxmertConfig configuration class: LxmertForQuestionAnswering (LXMERT model)
- MBartConfig configuration class: MBartForQuestionAnswering (mBART model)
- MPNetConfig configuration class: MPNetForQuestionAnswering (MPNet model)
- MegatronBertConfig configuration class: MegatronBertForQuestionAnswering (MegatronBert model)
- MobileBertConfig configuration class: MobileBertForQuestionAnswering (MobileBERT model)
- QDQBertConfig configuration class: QDQBertForQuestionAnswering (QDQBert model)
- ReformerConfig configuration class: ReformerForQuestionAnswering (Reformer model)
- RemBertConfig configuration class: RemBertForQuestionAnswering (RemBERT model)
- RoFormerConfig configuration class: RoFormerForQuestionAnswering (RoFormer model)
- RobertaConfig configuration class: RobertaForQuestionAnswering (RoBERTa model)
- SplinterConfig configuration class: SplinterForQuestionAnswering (Splinter model)
- SqueezeBertConfig configuration class: SqueezeBertForQuestionAnswering (SqueezeBERT model)
- XLMConfig configuration class: XLMForQuestionAnsweringSimple (XLM model)
- XLMRobertaConfig configuration class: XLMRobertaForQuestionAnswering (XLM-RoBERTa model)
- XLNetConfig configuration class: XLNetForQuestionAnsweringSimple (XLNet model)
Instantiates one of the model classes of the library (with a question answering head) from a configuration.
Note:
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.
Examples:
>>> from transformers import AutoConfig, AutoModelForQuestionAnswering
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('bert-base-cased')
>>> model = AutoModelForQuestionAnswering.from_config(config)
( *model_args **kwargs )
Parameters
-
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
  - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
  - A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
  - A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
-
model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
-
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
  - The model is a model provided by the library (loaded with the model id string of a pretrained model).
  - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
  - The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
-
state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
-
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
-
from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
-
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
-
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
-
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
-
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
-
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
-
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
-
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
-
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
  - If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Instantiate one of the model classes of the library (with a question answering head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or, when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
- albert → AlbertForQuestionAnswering (ALBERT model)
- bart → BartForQuestionAnswering (BART model)
- bert → BertForQuestionAnswering (BERT model)
- big_bird → BigBirdForQuestionAnswering (BigBird model)
- bigbird_pegasus → BigBirdPegasusForQuestionAnswering (BigBirdPegasus model)
- camembert → CamembertForQuestionAnswering (CamemBERT model)
- canine → CanineForQuestionAnswering (Canine model)
- convbert → ConvBertForQuestionAnswering (ConvBERT model)
- deberta → DebertaForQuestionAnswering (DeBERTa model)
- deberta-v2 → DebertaV2ForQuestionAnswering (DeBERTa-v2 model)
- distilbert → DistilBertForQuestionAnswering (DistilBERT model)
- electra → ElectraForQuestionAnswering (ELECTRA model)
- flaubert → FlaubertForQuestionAnsweringSimple (FlauBERT model)
- fnet → FNetForQuestionAnswering (FNet model)
- funnel → FunnelForQuestionAnswering (Funnel Transformer model)
- gptj → GPTJForQuestionAnswering (GPT-J model)
- ibert → IBertForQuestionAnswering (I-BERT model)
- layoutlmv2 → LayoutLMv2ForQuestionAnswering (LayoutLMv2 model)
- led → LEDForQuestionAnswering (LED model)
- longformer → LongformerForQuestionAnswering (Longformer model)
- lxmert → LxmertForQuestionAnswering (LXMERT model)
- mbart → MBartForQuestionAnswering (mBART model)
- megatron-bert → MegatronBertForQuestionAnswering (MegatronBert model)
- mobilebert → MobileBertForQuestionAnswering (MobileBERT model)
- mpnet → MPNetForQuestionAnswering (MPNet model)
- qdqbert → QDQBertForQuestionAnswering (QDQBert model)
- reformer → ReformerForQuestionAnswering (Reformer model)
- rembert → RemBertForQuestionAnswering (RemBERT model)
- roberta → RobertaForQuestionAnswering (RoBERTa model)
- roformer → RoFormerForQuestionAnswering (RoFormer model)
- splinter → SplinterForQuestionAnswering (Splinter model)
- squeezebert → SqueezeBertForQuestionAnswering (SqueezeBERT model)
- xlm → XLMForQuestionAnsweringSimple (XLM model)
- xlm-roberta → XLMRobertaForQuestionAnswering (XLM-RoBERTa model)
- xlnet → XLNetForQuestionAnsweringSimple (XLNet model)
The model is set in evaluation mode by default using model.eval() (so, for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForQuestionAnswering
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForQuestionAnswering.from_pretrained('bert-base-cased')
>>> # Update configuration during loading
>>> model = AutoModelForQuestionAnswering.from_pretrained('bert-base-cased', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained('./tf_model/bert_tf_model_config.json')
>>> model = AutoModelForQuestionAnswering.from_pretrained('./tf_model/bert_tf_checkpoint.ckpt.index', from_tf=True, config=config)
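A minimal extractive question answering sketch (the question/context pair is invented; distilbert-base-cased-distilled-squad is used only because a SQuAD-finetuned head gives meaningful start/end logits):

>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForQuestionAnswering

>>> checkpoint = 'distilbert-base-cased-distilled-squad'
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> model = AutoModelForQuestionAnswering.from_pretrained(checkpoint)

>>> question = "Where do penguins live?"
>>> context = "Penguins live almost exclusively in the Southern Hemisphere."
>>> inputs = tokenizer(question, context, return_tensors='pt')
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> # The head predicts start and end positions of the answer span over the input tokens.
>>> start = int(outputs.start_logits.argmax())
>>> end = int(outputs.end_logits.argmax())
>>> answer = tokenizer.decode(inputs['input_ids'][0][start:end + 1])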
AutoModelForTableQuestionAnswering
This is a generic model class that will be instantiated as one of the model classes of the library (with a table question answering head) when created
with the from_pretrained()
class method or the
from_config()
class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
-
config (PretrainedConfig) —
The model class to instantiate is selected based on the configuration class:
- TapasConfig configuration class: TapasForQuestionAnswering (TAPAS model)
Instantiates one of the model classes of the library (with a table question answering head) from a configuration.
Note:
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.
Examples:
>>> from transformers import AutoConfig, AutoModelForTableQuestionAnswering
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('google/tapas-base-finetuned-wtq')
>>> model = AutoModelForTableQuestionAnswering.from_config(config)
( *model_args **kwargs )
Parameters
-
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
  - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
  - A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
  - A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
-
model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
-
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
  - The model is a model provided by the library (loaded with the model id string of a pretrained model).
  - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
  - The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
-
state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
-
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
-
from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
-
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
-
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
-
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
-
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
-
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
-
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
-
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
-
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
  - If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Instantiate one of the model classes of the library (with a table question answering head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or, when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
- tapas → TapasForQuestionAnswering (TAPAS model)
The model is set in evaluation mode by default using model.eval() (so, for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForTableQuestionAnswering
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForTableQuestionAnswering.from_pretrained('google/tapas-base-finetuned-wtq')
>>> # Update configuration during loading
>>> model = AutoModelForTableQuestionAnswering.from_pretrained('google/tapas-base-finetuned-wtq', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained('./tf_model/tapas_tf_model_config.json')
>>> model = AutoModelForTableQuestionAnswering.from_pretrained('./tf_model/tapas_tf_checkpoint.ckpt.index', from_tf=True, config=config)
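A minimal usage sketch (the table and query are invented; TAPAS expects the table as a pandas DataFrame of strings, which its dedicated tokenizer flattens into token-type-annotated input):

>>> import pandas as pd
>>> import torch
>>> from transformers import AutoTokenizer, AutoModelForTableQuestionAnswering

>>> tokenizer = AutoTokenizer.from_pretrained('google/tapas-base-finetuned-wtq')
>>> model = AutoModelForTableQuestionAnswering.from_pretrained('google/tapas-base-finetuned-wtq')

>>> # All table cells must be strings.
>>> table = pd.DataFrame({'City': ['Paris', 'Lyon'], 'Population': ['2100000', '515000']})
>>> inputs = tokenizer(table=table, queries=['Which city has the larger population?'], return_tensors='pt')
>>> with torch.no_grad():
...     outputs = model(**inputs)
>>> # Cell-selection logits; the tokenizer's convert_logits_to_predictions() can map
>>> # them back to table coordinates.
>>> logits = outputs.logits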
AutoModelForImageClassification
This is a generic model class that will be instantiated as one of the model classes of the library (with an image classification head) when created
with the from_pretrained()
class method or the
from_config()
class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
-
config (PretrainedConfig) —
The model class to instantiate is selected based on the configuration class:
- BeitConfig configuration class: BeitForImageClassification (BEiT model)
- DeiTConfig configuration class: DeiTForImageClassification or DeiTForImageClassificationWithTeacher (DeiT model)
- ImageGPTConfig configuration class: ImageGPTForImageClassification (ImageGPT model)
- PerceiverConfig configuration class: PerceiverForImageClassificationLearned or PerceiverForImageClassificationFourier or PerceiverForImageClassificationConvProcessing (Perceiver model)
- SegformerConfig configuration class: SegformerForImageClassification (SegFormer model)
- ViTConfig configuration class: ViTForImageClassification (ViT model)
Instantiates one of the model classes of the library (with an image classification head) from a configuration.
Note:
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.
Examples:
>>> from transformers import AutoConfig, AutoModelForImageClassification
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('google/vit-base-patch16-224')
>>> model = AutoModelForImageClassification.from_config(config)
( *model_args **kwargs )
Parameters
-
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
  - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
  - A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
  - A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
-
model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
-
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
  - The model is a model provided by the library (loaded with the model id string of a pretrained model).
  - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
  - The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
-
state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
-
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
-
from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
-
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
-
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
-
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
-
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
-
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
-
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
-
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
-
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
  - If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Instantiate one of the model classes of the library (with an image classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or, when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
- beit → BeitForImageClassification (BEiT model)
- deit → DeiTForImageClassification or DeiTForImageClassificationWithTeacher (DeiT model)
- imagegpt → ImageGPTForImageClassification (ImageGPT model)
- perceiver → PerceiverForImageClassificationLearned or PerceiverForImageClassificationFourier or PerceiverForImageClassificationConvProcessing (Perceiver model)
- segformer → SegformerForImageClassification (SegFormer model)
- vit → ViTForImageClassification (ViT model)
The model is set in evaluation mode by default using model.eval() (so, for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForImageClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForImageClassification.from_pretrained('google/vit-base-patch16-224')
>>> # Update configuration during loading
>>> model = AutoModelForImageClassification.from_pretrained('google/vit-base-patch16-224', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained('./tf_model/vit_tf_model_config.json')
>>> model = AutoModelForImageClassification.from_pretrained('./tf_model/vit_tf_checkpoint.ckpt.index', from_tf=True, config=config)
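A minimal classification sketch (the image path is a placeholder; google/vit-base-patch16-224 is just one ImageNet-finetuned choice, paired with its feature extractor for resizing and normalization):

>>> import torch
>>> from PIL import Image
>>> from transformers import AutoFeatureExtractor, AutoModelForImageClassification

>>> checkpoint = 'google/vit-base-patch16-224'
>>> feature_extractor = AutoFeatureExtractor.from_pretrained(checkpoint)
>>> model = AutoModelForImageClassification.from_pretrained(checkpoint)

>>> image = Image.open('cat.jpg').convert('RGB')  # any local image; the path is made up
>>> inputs = feature_extractor(images=image, return_tensors='pt')
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_label = model.config.id2label[int(logits.argmax(-1))]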
AutoModelForVision2Seq
This is a generic model class that will be instantiated as one of the model classes of the library (with a vision-to-text modeling head) when created
with the from_pretrained()
class method or the
from_config()
class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
-
config (PretrainedConfig) —
The model class to instantiate is selected based on the configuration class:
- VisionEncoderDecoderConfig configuration class: VisionEncoderDecoderModel (Vision Encoder decoder model)
Instantiates one of the model classes of the library (with a vision-to-text modeling head) from a configuration.
Note:
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.
Examples:
>>> from transformers import AutoConfig, AutoModelForVision2Seq
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('nlpconnect/vit-gpt2-image-captioning')
>>> model = AutoModelForVision2Seq.from_config(config)
( *model_args **kwargs )
Parameters
-
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
  - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
  - A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
  - A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
-
model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
-
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
  - The model is a model provided by the library (loaded with the model id string of a pretrained model).
  - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
  - The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
-
state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
-
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
-
from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
-
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
-
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
-
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
-
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
-
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
-
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
-
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
-
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
  - If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Instantiate one of the model classes of the library (with a vision-to-text modeling head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or, when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
- vision-encoder-decoder → VisionEncoderDecoderModel (Vision Encoder decoder model)
The model is set in evaluation mode by default using model.eval() (so, for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForVision2Seq
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForVision2Seq.from_pretrained('nlpconnect/vit-gpt2-image-captioning')
>>> # Update configuration during loading
>>> model = AutoModelForVision2Seq.from_pretrained('nlpconnect/vit-gpt2-image-captioning', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained('./tf_model/vision_encoder_decoder_tf_model_config.json')
>>> model = AutoModelForVision2Seq.from_pretrained('./tf_model/vision_encoder_decoder_tf_checkpoint.ckpt.index', from_tf=True, config=config)
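A minimal captioning sketch (the image path is a placeholder and nlpconnect/vit-gpt2-image-captioning is one publicly available VisionEncoderDecoder checkpoint, used here only for illustration):

>>> from PIL import Image
>>> from transformers import AutoFeatureExtractor, AutoTokenizer, AutoModelForVision2Seq

>>> checkpoint = 'nlpconnect/vit-gpt2-image-captioning'
>>> feature_extractor = AutoFeatureExtractor.from_pretrained(checkpoint)
>>> tokenizer = AutoTokenizer.from_pretrained(checkpoint)
>>> model = AutoModelForVision2Seq.from_pretrained(checkpoint)

>>> image = Image.open('cat.jpg').convert('RGB')  # the path is made up for the example
>>> pixel_values = feature_extractor(images=image, return_tensors='pt').pixel_values
>>> # The decoder generates the caption token by token from the encoded image.
>>> generated_ids = model.generate(pixel_values)
>>> caption = tokenizer.decode(generated_ids[0], skip_special_tokens=True)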
AutoModelForAudioClassification
This is a generic model class that will be instantiated as one of the model classes of the library (with an audio classification head) when created
with the from_pretrained()
class method or the
from_config()
class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
-
config (PretrainedConfig) —
The model class to instantiate is selected based on the configuration class:
- HubertConfig configuration class: HubertForSequenceClassification (Hubert model)
- SEWConfig configuration class: SEWForSequenceClassification (SEW model)
- SEWDConfig configuration class: SEWDForSequenceClassification (SEW-D model)
- UniSpeechConfig configuration class: UniSpeechForSequenceClassification (UniSpeech model)
- UniSpeechSatConfig configuration class: UniSpeechSatForSequenceClassification (UniSpeechSat model)
- Wav2Vec2Config configuration class: Wav2Vec2ForSequenceClassification (Wav2Vec2 model)
Instantiates one of the model classes of the library (with an audio classification head) from a configuration.
Note:
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.
Examples:
>>> from transformers import AutoConfig, AutoModelForAudioClassification
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('superb/wav2vec2-base-superb-ks')
>>> model = AutoModelForAudioClassification.from_config(config)
( *model_args **kwargs )
Parameters
-
pretrained_model_name_or_path (str or os.PathLike) — Can be either:
  - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
  - A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
  - A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as the config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
-
model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
-
config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
  - The model is a model provided by the library (loaded with the model id string of a pretrained model).
  - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
  - The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
-
state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from the saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
-
cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
-
from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see the docstring of the pretrained_model_name_or_path argument).
-
force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
-
resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
-
proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
-
output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
-
local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
-
revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
-
trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
-
kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and to initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
  - If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, kwargs will first be passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Instantiate one of the model classes of the library (with an audio classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or, when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
- hubert → HubertForSequenceClassification (Hubert model)
- sew → SEWForSequenceClassification (SEW model)
- sew-d → SEWDForSequenceClassification (SEW-D model)
- unispeech → UniSpeechForSequenceClassification (UniSpeech model)
- unispeech-sat → UniSpeechSatForSequenceClassification (UniSpeechSat model)
- wav2vec2 → Wav2Vec2ForSequenceClassification (Wav2Vec2 model)
The model is set in evaluation mode by default using model.eval() (so, for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForAudioClassification
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForAudioClassification.from_pretrained('superb/wav2vec2-base-superb-ks')
>>> # Update configuration during loading
>>> model = AutoModelForAudioClassification.from_pretrained('superb/wav2vec2-base-superb-ks', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower)
>>> config = AutoConfig.from_pretrained('./tf_model/wav2vec2_tf_model_config.json')
>>> model = AutoModelForAudioClassification.from_pretrained('./tf_model/wav2vec2_tf_checkpoint.ckpt.index', from_tf=True, config=config)
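A minimal sketch (random noise stands in for real 16 kHz mono audio; superb/wav2vec2-base-superb-ks is one keyword-spotting checkpoint chosen for illustration):

>>> import numpy as np
>>> import torch
>>> from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

>>> checkpoint = 'superb/wav2vec2-base-superb-ks'
>>> feature_extractor = AutoFeatureExtractor.from_pretrained(checkpoint)
>>> model = AutoModelForAudioClassification.from_pretrained(checkpoint)

>>> # One second of "audio" as a 1D float array sampled at 16 kHz.
>>> raw_speech = np.random.randn(16000).astype('float32')
>>> inputs = feature_extractor(raw_speech, sampling_rate=16000, return_tensors='pt')
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_label = model.config.id2label[int(logits.argmax(-1))]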
AutoModelForCTC
This is a generic model class that will be instantiated as one of the model classes of the library (with a connectionist temporal classification head) when created
with the from_pretrained()
class method or the
from_config()
class method.
This class cannot be instantiated directly using __init__()
(throws an error).
( **kwargs )
Parameters
-
config (PretrainedConfig) —
The model class to instantiate is selected based on the configuration class:
- HubertConfig configuration class: HubertForCTC (Hubert model)
- SEWConfig configuration class: SEWForCTC (SEW model)
- SEWDConfig configuration class: SEWDForCTC (SEW-D model)
- UniSpeechConfig configuration class: UniSpeechForCTC (UniSpeech model)
- UniSpeechSatConfig configuration class: UniSpeechSatForCTC (UniSpeechSat model)
- Wav2Vec2Config configuration class: Wav2Vec2ForCTC (Wav2Vec2 model)
Instantiates one of the model classes of the library (with a connectionist temporal classification head) from a configuration.
Note:
Loading a model from its configuration file does not load the model weights. It only affects the model's configuration. Use from_pretrained() to load the model weights.
Examples:
>>> from transformers import AutoConfig, AutoModelForCTC
>>> # Download configuration from huggingface.co and cache.
>>> config = AutoConfig.from_pretrained('facebook/wav2vec2-base-960h')
>>> model = AutoModelForCTC.from_config(config)
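A minimal greedy-decoding transcription sketch (random noise stands in for real 16 kHz speech; facebook/wav2vec2-base-960h is one CTC checkpoint chosen for illustration):

>>> import numpy as np
>>> import torch
>>> from transformers import Wav2Vec2Processor, AutoModelForCTC

>>> checkpoint = 'facebook/wav2vec2-base-960h'
>>> processor = Wav2Vec2Processor.from_pretrained(checkpoint)
>>> model = AutoModelForCTC.from_pretrained(checkpoint)

>>> # One second of "speech" as a 1D float array sampled at 16 kHz.
>>> raw_speech = np.random.randn(16000).astype('float32')
>>> inputs = processor(raw_speech, sampling_rate=16000, return_tensors='pt')
>>> with torch.no_grad():
...     logits = model(**inputs).logits  # shape: (batch, frames, vocab_size)
>>> # Greedy CTC decoding: best token per frame, then collapse repeats and blanks.
>>> predicted_ids = logits.argmax(dim=-1)
>>> transcription = processor.batch_decode(predicted_ids)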
( *model_args **kwargs )
Parameters
- pretrained_model_name_or_path (str or os.PathLike) — Can be either:
  - A string, the model id of a pretrained model hosted inside a model repo on huggingface.co. Valid model ids can be located at the root-level, like bert-base-uncased, or namespaced under a user or organization name, like dbmdz/bert-base-german-cased.
  - A path to a directory containing model weights saved using save_pretrained(), e.g., ./my_model_directory/.
  - A path or url to a tensorflow index checkpoint file (e.g., ./tf_model/model.ckpt.index). In this case, from_tf should be set to True and a configuration object should be provided as config argument. This loading path is slower than converting the TensorFlow checkpoint to a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
- model_args (additional positional arguments, optional) — Will be passed along to the underlying model __init__() method.
- config (PretrainedConfig, optional) — Configuration for the model to use instead of an automatically loaded configuration. Configuration can be automatically loaded when:
  - The model is a model provided by the library (loaded with the model id string of a pretrained model).
  - The model was saved using save_pretrained() and is reloaded by supplying the save directory.
  - The model is loaded by supplying a local directory as pretrained_model_name_or_path and a configuration JSON file named config.json is found in the directory.
- state_dict (Dict[str, torch.Tensor], optional) — A state dictionary to use instead of a state dictionary loaded from saved weights file. This option can be used if you want to create a model from a pretrained configuration but load your own weights. In this case though, you should check if using save_pretrained() and from_pretrained() is not a simpler option.
- cache_dir (str or os.PathLike, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- from_tf (bool, optional, defaults to False) — Load the model weights from a TensorFlow checkpoint save file (see docstring of pretrained_model_name_or_path argument).
- force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
- proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (e.g., not try downloading the model).
- revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
- trust_remote_code (bool, optional, defaults to False) — Whether or not to allow for custom models defined on the Hub in their own modeling files. This option should only be set to True for repositories you trust and in which you have read the code, as it will execute code present on the Hub on your local machine.
- kwargs (additional keyword arguments, optional) — Can be used to update the configuration object (after it has been loaded) and initialize the model (e.g., output_attentions=True). Behaves differently depending on whether a config is provided or automatically loaded:
  - If a configuration is provided with config, **kwargs will be directly passed to the underlying model's __init__ method (we assume all relevant updates to the configuration have already been done).
  - If a configuration is not provided, kwargs will be first passed to the configuration class initialization function (from_pretrained()). Each key of kwargs that corresponds to a configuration attribute will be used to override said attribute with the supplied kwargs value. Remaining keys that do not correspond to any configuration attribute will be passed to the underlying model's __init__ function.
Instantiate one of the model classes of the library (with a connectionist temporal classification head) from a pretrained model.
The model class to instantiate is selected based on the model_type property of the config object (either passed as an argument or loaded from pretrained_model_name_or_path if possible), or, when it's missing, by falling back to using pattern matching on pretrained_model_name_or_path:
- hubert → HubertForCTC (Hubert model)
- sew → SEWForCTC (SEW model)
- sew-d → SEWDForCTC (SEW-D model)
- unispeech → UniSpeechForCTC (UniSpeech model)
- unispeech-sat → UniSpeechSatForCTC (UniSpeechSat model)
- wav2vec2 → Wav2Vec2ForCTC (Wav2Vec2 model)
The model is set in evaluation mode by default using model.eval() (so, for instance, dropout modules are deactivated). To train the model, you should first set it back in training mode with model.train().
Examples:
>>> from transformers import AutoConfig, AutoModelForCTC
>>> # Download model and configuration from huggingface.co and cache.
>>> model = AutoModelForCTC.from_pretrained('facebook/wav2vec2-base-960h')
>>> # Update configuration during loading
>>> model = AutoModelForCTC.from_pretrained('facebook/wav2vec2-base-960h', output_attentions=True)
>>> model.config.output_attentions
True
>>> # Loading from a TF checkpoint file instead of a PyTorch model (slower; paths shown are illustrative)
>>> config = AutoConfig.from_pretrained('./tf_model/wav2vec2_tf_model_config.json')
>>> model = AutoModelForCTC.from_pretrained('./tf_model/wav2vec2_tf_checkpoint.ckpt.index', from_tf=True, config=config)
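To show the CTC head in action, here is a hedged sketch of a greedy transcription loop with the matching processor; the zero-filled waveform is a stand-in for real 16 kHz speech, so the decoded string will not be meaningful:
>>> import torch
>>> from transformers import AutoModelForCTC, Wav2Vec2Processor
>>> model = AutoModelForCTC.from_pretrained('facebook/wav2vec2-base-960h')
>>> processor = Wav2Vec2Processor.from_pretrained('facebook/wav2vec2-base-960h')
>>> waveform = torch.zeros(16000).numpy()  # dummy input; use real speech in practice
>>> inputs = processor(waveform, sampling_rate=16000, return_tensors='pt')
>>> with torch.no_grad():
...     logits = model(**inputs).logits
>>> predicted_ids = logits.argmax(dim=-1)
>>> processor.batch_decode(predicted_ids)  # greedy CTC decoding into text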